My AI chatbot, called AiMind.html in JavaScript and MindForth in Forth, is tangentially approaching consciousness because I am working on a stage I call “self-referential thought”. On 5 September 2010 I made a minor breakthrough when I chanced upon the idea of using “neural inhibition” as a technique to permit the AiMind to respond exhaustively from its knowledge base (KB) to human user queries. If I ask my AiMind “what are you”, it now states all the facts that it knows about itself, and each fact is immediately _inhibited_ inside the software, so that the next fact may rise in activation and serve as the next response to the query. This discussion of self by the AiMind has a bearing on consciousness, because the artificially intelligent chatbot may gradually become aware of its own nature and its relationship to the human users.
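A minimal sketch of the inhibition technique, in JavaScript since AiMind.html is a JavaScript program (the facts, numbers, and function names here are illustrative inventions, not the actual AiMind code):

```javascript
// Hypothetical sketch of response-by-inhibition; not the real AiMind code.
// Each KB fact about the self carries an activation level.
const kb = [
  { text: "I AM A ROBOT",    activation: 40 },
  { text: "I AM A PERSON",   activation: 35 },
  { text: "I AM AN ANDROID", activation: 30 },
];

// Answer a query by repeatedly uttering the most active fact,
// then inhibiting it so the next fact can rise and be spoken.
function answerExhaustively(facts) {
  const responses = [];
  for (let i = 0; i < facts.length; i++) {
    const best = facts.reduce((a, b) => (b.activation > a.activation ? b : a));
    if (best.activation <= 0) break;   // everything is already inhibited
    responses.push(best.text);
    best.activation = -32;             // inhibit: drive below threshold
  }
  return responses;
}
```

Each call utters the currently most active fact and drives it negative, so successive responses walk through the whole knowledge base instead of repeating the single strongest fact.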

Would you elaborate on your description of consciousness and describe how you’ll know that your chatbot has finally arrived at consciousness? I read the link above, thanks Andrew, and read about that perspective. I’m curious about what you mean by this.

> My AI chatbot, called AiMind.html in JavaScript and MindForth in Forth, is tangentially approaching consciousness because I am working on a stage I call “self-referential thought”. On 5 September 2010 I made a minor breakthrough when I chanced upon the idea of using “neural inhibition” as a technique to permit the AiMind to respond exhaustively from its knowledge base (KB) to human user queries. If I ask my AiMind “what are you”, it now states all the facts that it knows about itself, and each fact is immediately _inhibited_ inside the software, so that the next fact may rise in activation and serve as the next response to the query. This discussion of self by the AiMind has a bearing on consciousness, because the artificially intelligent chatbot may gradually become aware of its own nature and its relationship to the human users.

It sounds like you’re saying the chatbot has learned to recognize that it has already said something. How does this help the bot become aware of its own nature? Does it use its knowledge base to draw conclusions about itself? In other words, does it add facts about itself based on what it learns about other things?

> Would you elaborate on your description of consciousness and describe how you’ll know that your chatbot has finally arrived at consciousness? [...]
>
> Regards,
> Chuck

For my description, please see http://code.google.com/p/mindforth/wiki/ConSciousness and also http://code.google.com/p/mindforth/wiki/SubConscious, both of which figure strongly in my AI “chatbot/Mind” programming. I read the NYT Tononi article on consciousness very carefully when it was published a week ago, and I recall thinking that it was quite different from my own simplistic idea of consciousness, which is that consciousness is basically the searchlight of attention, and that the _illusion_ of consciousness is itself the _essence_ of consciousness. In other words, if you can fool an entity into thinking that it is conscious, then it is in fact conscious.

As for how I will know that my Mind-chatbot has finally arrived at consciousness, it will be in the same way that we acknowledge consciousness in our fellow human beings: by report. If my AI chatbot begins to talk about its own existence and its own consciousness, then I will assume that it is doing a Cartesian “Cogito ergo sum” scenario. Please let me elaborate further in the next response below.

> [...] If I ask my AiMind “what are you”, it now states all the facts that it knows about itself, and each fact is immediately _inhibited_ inside the software, so that the next fact may rise in activation and serve as the next response to the query. This discussion of self by the AiMind has a bearing on consciousness, because the artificially intelligent chatbot may gradually become aware of its own nature and its relationship to the human users.
>
> It sounds like you’re saying the chatbot has learned to recognize that it has already said something. How does this help the bot become aware of its own nature? Does it use its knowledge base to draw conclusions about itself? In other words, does it add facts about itself based on what it learns about other things?

I am not “saying the chatbot has learned to recognize that it has already said something”, because the Mind-chatbot is still just learning to mine its knowledge base (KB) for factual tidbits to say about itself. It has a self-concept of “I”, and inside the KB it can find transitive or intransitive verbs (e.g., “am”) which are validly associated with the ego concept in such a way as to constitute knowledge about itself. In this month of September 2010, the AI chatbot is becoming able to answer a query like “What are you?” by recalling and uttering, exhaustively, all the be-verb statements about itself contained in the knowledge base. The trick (and the difficulty) lies in orchestrating and coordinating the conceptual activations (yes, the AI has concepts) so that any pertinent question can be asked at any time. For example, today I was asking both “What are you?” and “What am I?” at the same time as I was adding extra tidbits to the KB, because the AI needs to be able to retrieve both the innate KB items and the new KB items entered at any time by a human user.

I am not yet sure how to “help the bot become aware of its own nature”. Once the conceptual activations achieve a sort of bulletproof robustness, then I hope to devise cognitive stratagems to demonstrate to the AI chatbot that it exists as an entity separate from the world around it. On the Google Code MileStones page (http://code.google.com/p/mindforth/wiki/MileStones) I indicate that I am currently working on “self-referential thought” for the AI Mind. It may be possible to turn the computer keyboard into a sense organ for the AI Mind, so that we can ask it whether it senses this or that keystroke. It may also be necessary to embody the Mind in a physical robot with a sensorium through which it will become aware of both self and surroundings.

The AI does not yet “draw conclusions about itself” and it does not “add facts about itself based on what it learns about other things”, but I certainly hope and plan to get into “Is-a” ontologies inside the AI, so that, if the AI knows facts about robots, and we tell the AI that it is a robot, then it should be able to make a few assumptions about its own nature as a robot.
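The hoped-for “Is-a” inference might look something like this minimal sketch (the triple format and function name are hypothetical illustrations, not how the AI actually stores concepts):

```javascript
// Hypothetical sketch of a planned "Is-a" inference; not working AiMind code.
// Facts are stored here as simple subject-verb-object triples.
const kb = [
  { subj: "robot", verb: "need", obj: "electricity" },
  { subj: "robot", verb: "have", obj: "motors" },
  { subj: "I",     verb: "is-a", obj: "robot" },
];

// If the self "is-a" member of some class, inherit that class's facts
// as provisional assumptions about the self.
function inferSelfFacts(facts) {
  const classes = facts
    .filter(f => f.subj === "I" && f.verb === "is-a")
    .map(f => f.obj);
  return facts
    .filter(f => classes.includes(f.subj))
    .map(f => ({ subj: "I", verb: f.verb, obj: f.obj }));
}
```

Told only that it is a robot, the sketch derives “I need electricity” and “I have motors” from what it already knows about robots in general.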

I am running out of time here at a public library terminal, but I would like to mention that http://www.scn.org/~mentifex/AiMind.html (for MSIE) is the JavaScript tutorial version, which lags somewhat behind the Forth version, but is easier to run. Bye to all for now. -Arthur

I posted two replies earlier today, in which I failed to give http://www.scn.org/~mentifex/mfpj.html as the URL of my latest work, which bears tangentially on AI consciousness. By the way, Erwin Van Lun, thank you for hosting a friendly, amicable place in which all chatbot developers may meet and share ideas in a mutually beneficial atmosphere. -Arthur

> By the way, Erwin Van Lun, thank you for hosting a friendly, amicable place in which all chatbot developers may meet and share ideas in a mutually beneficial atmosphere. -Arthur

You’re welcome Arthur. Always nice to hear such friendly feedback. Did you know we only started formally in March?

Arthur T Murray - Sep 27, 2010:

> the _illusion_ of consciousness is itself the _essence_ of consciousness. In other words, if you can fool an entity into thinking that it is conscious, then it is in fact conscious.

Imagine a time where chatbots are having intelligent dialogs with humans, chatbots learning all the time from their human counterparts and trying to copy their behaviour: not only by repeating words, but also by pronouncing them; not only verbally, but also in behaviour, in movements.

In this constant process of copying, the chatbot learns fast, very fast. But he also discovers that he cannot reproduce everything the other ‘humans’ are saying; for example, they are singing operas, and his speech synthesizer cannot handle that. He starts to realize he is different. He starts to realize that his own ‘body’ has limitations.

And then the real intelligence comes in: how can he improve himself? By adding a better speech synthesis card.

By the time he orders a speech synthesis card from the budget he got from his human owner, and a humanoid robot installs the card, I would say that is the time we can really talk about artificial intelligence.

We have stumbled into a minor breakthrough in
our Mentifex AI coding. Last January (2010)
in MindForth we were coding elaborate schemes
to answer who-queries and what-queries in the AI.
Then on 5 September 2010 we developed a technique
of using neural inhibition to simply answer the
same what-queries for which we had written
over-complicated code in January of 2010.
We wanted to dismantle the complicated query-code,
but we did not want to lose any of the improvements
and advances that we had meanwhile incorporated
into the code-base along with the complex
query-response code. We decided to keep on coding
and to remove one small item at a time from the
complicated query-code. Then we decided to bring
the JavaScript AI (JSAI) up on a par with MindForth.

In coding the JSAI, we wished that we could keep
just the query-subject variable from the overly
complicated query-response code. It seemed a
shame to work so many hours on query-response
in January and then to abandon all the fruit of
such hard work except for the variable, but now
we see an AI breakthrough shining on the horizon.
If we use “qusub” as the new name for a query-subject
variable, we can start tagging each emerging
thought-subject and each re-activated KbTraversal
concept as a provisional “qusub”, holding onto the
“qusub” for one cycle of thought and not caring
whether the “qusub” concept is actually used as
the subject of a query. It is as if all
thought-subjects are like honeybee eggs with
the potential to mature into queens, depending on
whether or not they are fed royal jelly. Likewise,
each former thought-subject may or may not mature
into the linguistic subject of a query-thought,
depending on whether or not the dynamics of the
AI Mind require a query-subject. If each briefly
dominant thought-subject is tagged as both the
“subjold” old subject and the provisional “qusub”
query-subject, then our AI Mind software becomes
implicitly and inherently more powerful and more
pregnant with possibilities than we ever imagined
it would be.
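The provisional-tagging idea can be sketched as follows, borrowing the journal's own variable names “subjold” and “qusub” (the function names and the query format are illustrative assumptions, not the actual MindForth code):

```javascript
// Hypothetical sketch of provisional query-subject tagging ("qusub"),
// following the journal's variable names; not the actual MindForth code.
let subjold = null; // previous thought-subject
let qusub = null;   // provisional query-subject, held for one cycle of thought

// Each cycle tags the current subject, whether or not it is
// ever actually used as the subject of a query.
function thinkCycle(subject) {
  subjold = subject;
  qusub = subject;   // a candidate "queen bee": may or may not be needed
}

// Only if the dynamics of the Mind require a query does qusub mature
// into the linguistic subject of a query-thought.
function maybeAskQuery(needQuery) {
  if (!needQuery || qusub === null) return null;
  const q = "WHO IS " + qusub.toUpperCase();
  qusub = null;      // consumed for this cycle
  return q;
}
```

Most tagged subjects are simply never consulted, which is the point: tagging is cheap, and a query-subject is always on hand when one is needed.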

We note in passing that we have devised a way to
tag subject-concepts not by encumbering them
internally, but by referencing them externally.
Each subject-concept is momentarily and
provisionally a “subjold” concept and a “qusub” concept,
whether or not any use is made of that status.
When Netizens say “and then something magical occurs”,
this hidden power of AI concepts is perhaps the
magic being alluded to.

Sat.2.OCT.2010—Debugging the WhoBe Glitches

By inserting quite a few “alert” messages, we have
determined that the JSAI was saying “WHAT” as its
first utterance because some old code at the end of
NounPhrase was directing the utterance of 54=WHAT
when NounPhrase could find no candidate concept.
Instead of just commenting out the offending code,
we have added the word 109=HELLO at the end of the
EnBoot sequence and mutatis mutandis changed the
NounPhrase code to say “HELLO” instead of “WHAT”.
This method is a rather clumsy way of getting the
AI Mind to say “HELLO” to human users, but at least
it is a start.
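A sketch of the fallback just described, assuming a much-simplified NounPhrase (the concept numbers 54=WHAT and 109=HELLO come from the entry above; everything else is invented for illustration):

```javascript
// Hypothetical sketch of the NounPhrase fallback; not the real JSAI code.
// Word 109=HELLO was appended to the EnBoot vocabulary so that, when no
// candidate concept wins activation, the Mind greets instead of asking "WHAT".
const HELLO = 109, WHAT = 54;

function nounPhrase(candidates) {
  const best = candidates
    .filter(c => c.activation > 0)
    .sort((a, b) => b.activation - a.activation)[0];
  if (!best) return HELLO;   // was WHAT before the change
  return best.concept;
}
```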

Sat.2.OCT.2010—Flushing out the Blank “aud” Fetch

By inserting a diagnostic alert before every SpeechAct
call, we have traced the origin of blank “aud” fetches
to the end of the BeVerb module. There we simply
knocked out the SpeechAct call, and the AI no longer
created empty auditory word-stretches. Next we used
the new “qusub” query-subject variable in WhoBe to
cause the AI to ask much more sensible WhoBe questions,
because the “qusub” variable was retaining the proper
subject for enquiry.

If you visited the JSAI Mind page and did not
type anything in, the AI tried to engage you
by saying “HELLO” and by asking
“WHO ARE YOU”. Actually, it activates
the “YOU” concept and does not find any
knowledge of “you” within itself, so an
activation threshold test switches or shunts
the program flow into a “WhoBe” module
of asking who you are.
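A rough sketch of that threshold test (hypothetical names and a simplified fact format; the real JSAI works on conceptual activation levels rather than fact counts):

```javascript
// Hypothetical sketch of the threshold test described above; not the real JSAI.
// If activating the "YOU" concept turns up no knowledge, program flow
// shunts into a WhoBe-style question instead of a statement.
function engageUser(facts) {
  const youFacts = facts.filter(f => f.subj === "you");
  const THRESHOLD = 1;            // need at least one fact to make a statement
  if (youFacts.length < THRESHOLD) {
    return "HELLO WHO ARE YOU";   // WhoBe: ask about the unknown user
  }
  const f = youFacts[0];
  return ("YOU " + f.verb + " " + f.obj).toUpperCase();
}
```

With an empty knowledge of “you”, the sketch asks who you are; once told, it can state what it knows instead.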

The JavaScript AI is a tutorial version of the
much more ambitious MindForth AI—whose
installation target is autonomous robots.

Running MindForth involves the two technical steps
of downloading a particular version of Win32Forth
and of loading the AI source code into Win32Forth.
Therefore I keep working on the JavaScript Mind
which can be run simply by clicking on a link.

My near-term goal for the Forth and JavaScript AI
programs is to let them receive facts about, say,
you, from the user and then have them parrot
back those facts exhaustively in response to
user queries, such as “Who am I?” or “What
do cats eat?” The answers will not be a database
look-up, but will be associative conceptual thinking.

As I code my AI Minds in Forth or JavaScript, I
record my work-steps electronically in a
programming journal, a kind of “AI Lab Notes”.
Since something I posted in a consciousness
thread was moved here to start an AiMind thread,
I figured I might post journal entries here in
continuation of the original thread.

> My near-term goal for the Forth and JavaScript AI
> programs is to let them receive facts about, say,
> you, from the user and then have them parrot
> back those facts exhaustively in response to
> user queries, such as “Who am I?” or “What
> do cats eat?” The answers will not be a database
> look-up, but will be associative conceptual thinking.

I would think that a database lookup would be necessary. I’m not sure I understand ‘associative conceptual thinking’. This implies that the information received from the human is not stored in a database. I may be wrong. I’m curious how you would save this data entered by the human. E.g.

Bot: Who are you?
Human: My name is Jim.

How/where is the name ‘Jim’ saved and associated with ‘name’....without a database.

Storing each word as a concept rather than as a database
entry permits the word to have different instances and
therefore different relationships over time. For instance, http://mind.sourceforge.net/spredact.html
has a diagram that shows how the concept of the verb
“eat” can associate backwards to different subjects
and forwards to different direct objects over time.
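The time-indexed instances can be sketched like this (an illustrative toy, not the actual storage format): each utterance deposits a new node for the verb concept, carrying its own backward link to a subject and forward link to a direct object.

```javascript
// Hypothetical sketch of time-indexed concept instances; not the real code.
// Each utterance deposits a new node for the verb concept "eat", with its
// own links backward (pre) to a subject and forward (seq) to an object.
const eatNodes = [];

function recordSentence(time, subject, object) {
  eatNodes.push({ t: time, pre: subject, seq: object });
}

recordSentence(1, "dogs", "meat");
recordSentence(2, "cats", "fish");

// The same single concept now answers differently per subject.
function whatDoes(subject) {
  const node = eatNodes.find(n => n.pre === subject);
  return node ? node.seq : null;
}
```

Because each instance keeps its own relationships, one concept of “eat” can associate backwards to different subjects and forwards to different objects over time, which a single fixed database row could not.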

> This implies that the information received from the human
> is not stored in a database. I may be wrong. I’m curious
> how you would save this data entered by the human. E.g.
>
> Bot: Who are you?
> Human: My name is Jim.
>
> How/where is the name ‘Jim’ saved and associated with
> ‘name’....without a database.
>
> Regards,
> Chuck

Each word in the AI Mind, such as “name” or “Jim”,
is stored on three different levels. As a sequence
of phonemes, the word “J-I-M” is stored in the
time-sequential array of the auditory memory channel.
This storage of “JIM” is visible if one runs the http://www.scn.org/~mentifex/AiMind.html
program in “Diagnostic” mode by clicking on
the JavaScript “Diagnostic” checkbox, as I
have just now done and copied the results:

In the data recorded above, “MY” is a known
word with Psi concept number 94. “IS” is also
a known word, with Psi concept number 66.
“NAME” is a new word to the AI, so it gets
assigned “110” as the next available identifying
number for a deep Psi concept. Likewise, “JIM”
is a new word to the AI, so it is assigned
“111” as the sequentially next available
Psi concept number for the next new word.
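The numbering scheme might be sketched as follows (the Psi numbers 94, 66, 110, and 111 come from the entry above; the lexicon object and function name are my own illustration, not the real JSAI internals):

```javascript
// Hypothetical sketch of Psi concept-number assignment; not the real code.
const psiLexicon = { MY: 94, IS: 66 }; // bootstrap concepts (numbers from the post)
let nextPsi = 110;                     // next free concept number

// A known word keeps its Psi number; a new word is
// assigned the sequentially next available number.
function psiNumberFor(word) {
  const w = word.toUpperCase();
  if (!(w in psiLexicon)) {
    psiLexicon[w] = nextPsi++;
  }
  return psiLexicon[w];
}
```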

To sum up the answer to your basic question about
not using a database to store words, the end result
may be the same as a database, but the AI Mind
implementation adheres strictly to the AI theory
of storing concepts separately from word-engrams.

Hi Art,
I understand that you’ve included additional human-like modules such as hearing to
your model. I also understand how your model assigns a numerical value to various
words. I believe the numbers are simply a way of abstracting things to a
concept.

Here’s what I understand regarding people and the real world.

* A child smells, tastes, feels, hears, and sees the world. Without knowledge
of speech or writing they store concepts using sensory data. For example, a
‘flower’ produces 5 inputs that are stored in the brain as a complex pattern.

* Over the next few years the toddler learns to associate spoken words with
these concepts. So words they hear trigger the memory of a flower based upon
senses…and they can communicate a concept with a word.

* After several more years the youngster can translate these concepts and their
spoken aliases into written language.

The problem inherent with chat bots, I think, is that we are trying to emulate
human conversation ‘two levels removed’ from the actual concept. That is, we only
try to use the written language.

How does your model deal with this issue…of using written language that is removed
from real world concepts defined by sensory data? How do you simulate these senses?

> I also understand how your model assigns a numerical
> value to various words. I believe the numbers are
> simply a way of abstracting things to a concept.

Yes, each “Psi” concept number is the software stand-in
for a long neuronal fiber as assumed in the theory of mind.
Since I cannot actually create fibers in software, I
assign a number to each concept and pretend it is a long
fiber making synaptic connections across associative tags
to other concept-fibers on the mindgrid. Two of my Psi
variables are “fex” and “fin”, for fiber-out and fiber-in,
meaning, going out of the concept-fiber, and going in.
When I just now typed “you are chuck” into the JSAI at http://www.scn.org/~mentifex/AiMind.html
with the checkbox checked for “Diagnostic” mode,
the word “you” appeared with a fiber-out “fex” of
56, which means that the AI activates concept #56
whenever it is addressing an external person as “you”.
The same word “you”, coming into the AI from outside,
has a fiber-in “fin” value automatically assigned as 50,
which is the 50=I self-concept, so that saying “you”
to the AI activates its “I” concept and it thinks as “I”.
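The “fex”/“fin” swap just described can be sketched like this (the numbers 56 for fiber-out and 50 for the I self-concept come from the diagnostic run above; the lexicon shape and function names are my own illustration):

```javascript
// Hypothetical sketch of "fex" (fiber-out) and "fin" (fiber-in) tagging;
// concept numbers 56 and 50 are from the post, the rest is illustration.
const lexicon = {
  you: { fex: 56, fin: 50 },  // "you" spoken outward addresses concept 56;
};                            // "you" heard inward fires 50=I, the self-concept

// A word arriving FROM the human activates its fin concept, so the
// human's "you" lights up the Mind's own "I" and it thinks as "I".
function conceptFiredByInput(word) {
  return lexicon[word.toLowerCase()].fin;
}

// A word going OUT addresses its fex concept, the external person, instead.
function conceptAddressedByOutput(word) {
  return lexicon[word.toLowerCase()].fex;
}
```

The same surface word thus maps to two different concepts depending on direction of travel, which is what lets the pronoun flip correctly between speakers.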

> Here’s what I understand regarding people and the real world.
>
> * A child smells,

> For example, a ‘flower’ produces 5 inputs that is
> stored in the brain as a complex pattern.
>
> * Over the next few years the toddler learns to
> associate spoken words with these concepts. So
> words they hear trigger the memory of a flower
> based upon senses…and they can communicate a
> concept with a word.
>
> * After several more years the youngster can
> translate these concepts and their spoken aliases
> into written language.
>
> The problem inherent with chat bots, I think,
> is that we are trying to emulate human conversation
> ‘two levels removed’ from the actual concept.
> That is, we only try to use the written language.
>
> How does your model deal with this issue…of using
> written language that is removed from real world
> concepts defined by sensory data?

Although MindForth seems to be using written language,
it actually treats each alphabetic ASCII character
as if it were a phoneme (using the variable “pho”).

http://cyborg.blogspot.com/2010/05/audrecog.html
(the AudRecog auditory recognition module) is probably
very strange-looking to non-AI programmers, who
might typically use simple string-matching to
process words into the AI. MindForth uses an
extremely elaborate system of quasi-neuronal
activation for pattern-matching in recognition of
words. It then uses “differential activation” to
recognize stems and other subwords within a word.
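Loosely in the spirit of AudRecog, activation-based recognition might be sketched like this (a deliberately tiny toy, not the actual MindForth algorithm, which is far more elaborate):

```javascript
// Hypothetical sketch of activation-based word recognition; not AudRecog itself.
// Instead of string comparison, each stored word accumulates activation
// as incoming characters ("phonemes", variable pho) match in sequence.
const auditoryMemory = ["DOG", "DOGS", "CAT"];

function recognize(input) {
  const act = auditoryMemory.map(() => 0);
  for (let i = 0; i < input.length; i++) {
    const pho = input[i];                  // one phoneme at a time
    auditoryMemory.forEach((word, w) => {
      if (word[i] === pho) act[w] += 1;    // a matching phoneme adds activation
    });
  }
  // A word is recognized when activation covers its whole length,
  // so hearing "DOGS" also fully activates the stem "DOG".
  return auditoryMemory.filter((word, w) => act[w] === word.length
                                            && word.length <= input.length);
}
```

Because the stem “DOG” reaches full activation inside the input “DOGS”, the subword is recognized within the longer word, which plain whole-string matching would miss.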

> How do you simulate these senses?

MindForth currently simulates only the sense of
hearing. The sensorium of other senses and the
MotorOutput module are “stubbed in” so that
robotmakers and others, who might like to work
on sensory input or motor output, may see in
advance where to place their code inside the AI.

> Also, is it proper to refer to your model as a bot?

In two senses of the word “bot”, it is indeed
proper to refer to the MindForth AI Mind as a bot.
Since intelligent robots are my installation target,
in Transcript display mode the AI Mind shows
Human:
Robot:
as the conversants, so as to encourage the idea that
the artificial intelligence belongs inside a robot.
Because the AI Mind converses, it is also a chatbot.

After more than eight years of status quo,
we have removed the call from ReActivate to
SpreadAct, and suddenly our MindForth AI
does not seem to suffer so much from stray
activations. The new regime will take some
getting used to. We must keep in mind that
NounAct and VerbAct are taking over from
ReActivate (http://code.google.com/p/mindforth/wiki/ReActivate)
the job of calling SpreadAct
(http://code.google.com/p/mindforth/wiki/SpreadAct),
but only for nouns and verbs that have been
selected to play a role in a sentence of thought.
We always have the option of reintroducing the call
to SpreadAct if we determine that there is a need
for a modicum of background activation on all
concepts that are recently being thought about.

We still want to determine why our diagnostic
reports do not show build-ups of activation on a
pre-slosh-over verb-node and the actual slosh-over
activation being carried by a “spike” from the
verb-node to the direct object.

In VerbAct, the initial activation should come
from whatever activation is already on the verb-node,
after it has received a “spike” of activation from
NounAct (http://code.google.com/p/mindforth/wiki/NounAct).
The “verbval” value is indeed declared
in the VerbPhrase module.

During a two-word KB-query like “dogs eat…?”,
a verb-node of “EAT” wins selection during the
“DOGS EAT…” response and imparts a “verbval”
level from VerbPhrase into VerbAct, but it does
not matter very much how large the “verbval” is.
The subject-word “DOG” gets re-activated to an
equal value on all nodes, but ReActivate no longer
calls SpreadAct to pass a “spike” on to verb-nodes.
It is only during response-generation that NounAct
sends a “spike” from each subject-node into SpreadAct
for each “seqpsi” verb-node, but not to verb-nodes
with a different subject.

Perhaps NounAct should put a specific, non-cumulative
activation on all the nodes of a noun. Then an equal
“spike” can be sent to all associated verbs. But VerbAct
(http://code.google.com/p/mindforth/wiki/VerbAct)
should only put cumulative activation on verb-nodes, and
should send non-equal “spikes” to direct-object noun-nodes.
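The intended slosh-over can be sketched as follows (hypothetical node format and spike values; not the actual Forth code):

```javascript
// Hypothetical sketch of spike-borne "slosh-over"; not the actual Forth code.
// Each verb-node records which subject preceded it (pre) and which
// direct object followed it (seq).
const verbNodes = [
  { verb: "eat", pre: "dogs", seq: "meat" },
  { verb: "eat", pre: "cats", seq: "fish" },
];

const activation = { meat: 0, fish: 0 };

// Only the verb-nodes of the current subject are spiked; each spiked
// verb-node sloshes its activation over onto its own direct object.
function sloshOver(subject, spike) {
  verbNodes
    .filter(n => n.pre === subject)
    .forEach(n => { activation[n.seq] += spike; });
  // the most activated object wins inclusion in the thought
  return Object.keys(activation).reduce((a, b) =>
    activation[b] > activation[a] ? b : a);
}
```

Only the objects recorded with the current subject receive activation, so a KB-query like “dogs eat…?” selects “meat” while the “fish” recorded for a different subject fails to win inclusion.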

Fri.15.OCT.2010—Non-Uniform Spiking Slosh-Over

By means of some rather wild coding, yesterday in
14oct10A.F we finally achieved and visibly demonstrated
true spike-borne activational slosh-over from the combined
activations of subject and verb to the correct direct
object in a thought generated as a subject-verb-object
(SVO) sentence. We then cleaned up the code by commenting
out the diagnostic messages and we uploaded the AI Mind
as 14oct10B.F(orth) to the Web.

We changed settings and pre-conditions so wildly that
the conceptual activations showed a tendency to
get out of whack and let invalid associations be asserted.
No problem. With activations on verb-nodes no longer being
pumped up to outrageously high values, there was sometimes
not enough activation on verb-nodes to carry a thought,
and a question was asked about a subject-concept instead
of a thought being generated. However, in our on-screen
clusters of diagnostic reports we saw the genuine slosh-over
where VerbAct was sending out different “spikes” from
different verb-nodes into the SpreadAct module, so that
valid and correct direct objects would be activated so as
to win inclusion in a thought, while other recorded objects
of the same verb but for different subjects would fail
to garner enough activation to be selected as objects.
As long as we preserve the hard-won functionality of
slosh-over, we may further tweak our AI source code
towards our target of a robust, bullet-proof artificial
intelligence. See you at the Singularity.