For the Russian-thinking Dushka AI Mind, we have perhaps stumbled upon a way to avoid the hard-coding of noun paradigms and instead to let the Russian AI learn the inflected endings of Russian nouns from its own experience. For example, right now the Russian artificial intelligence (RuAi) fails to recognize the Psi concept #501 БОГ in the following exchange.

Aha! Suddenly it becomes clear that two things are happening. The Psi concept #501 is indeed being recognized at first, but perhaps the provisional-recognition "prc" variable is not being set, and so AudInput calls NewConcept as if the AI were learning a new word instead of recognizing an old word.

Sat.4.FEB.2012 -- Learning Russian Like a Human Child

Now, in a very rough way, we have trapped for "zad1" in the AudRecog module so as to recognize a noun (БОГА) with one character of inflection added onto it. Because the noun was indeed recognized, the InStantiate "seqneed" mechanism tagged the noun in the "ruLexicon" with a "dba" of "4" to indicate a direct-object accusative case. In other words, the Russian AI learned a new noun-form as a human child would learn it, that is, from the speech patterns of another speaker of Russian.

In our Dushka Russian AI we have the problem that new verb-forms generated on the fly by the VerbGen module are not being recognized and tagged with critical parameters as they settle into auditory memory. However, it looks as though a verb does get recognized if the "audpsi" tags for the verb in auditory memory extend far back enough to cover the stem of the verb. Therefore, instead of devising ways to bypass the chain of ReEntry calling AudMem, which in turn calls AudRecog, we should perhaps instead implement a "backfill" of any verb generated in the VerbGen module to let the "audpsi" tags extend back to the last "pho(neme)" of the verb-stem. Then the "provisional recall" mechanism in AudRecog ought to recognize the verb-form generated by the VerbGen module.
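The proposed backfill can be sketched in JavaScript. This is a minimal model, not the actual Dushka code: the auditory channel is reduced here to an array of engrams holding a phoneme and an "audpsi" tag, and the names makeEngrams and backfillAudpsi, as well as the concept-number 830, are illustrative assumptions.

```javascript
// Minimal model of the auditory memory channel: each engram holds one
// phoneme (character) and an optional concept-number tag ("audpsi").
// The real Dushka arrays differ; this sketch only illustrates the idea.
function makeEngrams(word) {
  return word.split("").map(function (pho) {
    return { pho: pho, audpsi: 0 };
  });
}

// Backfill: after VerbGen deposits a new verb-form, extend the known
// concept-number backwards so that it covers the last phoneme of the
// verb-stem, letting AudRecog's provisional-recall mechanism fire.
function backfillAudpsi(engrams, stemLength, conceptNumber) {
  for (var i = stemLength - 1; i < engrams.length; i++) {
    engrams[i].audpsi = conceptNumber;
  }
  return engrams;
}

// Example: stem "ЗНА" plus a generated inflection "ЕШЬ".
var engrams = makeEngrams("ЗНАЕШЬ");
backfillAudpsi(engrams, 3, 830); // 830 is an illustrative concept-number
```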

We created a "vip" variable to hold the value of "motjuste" when VerbPhrase calls VerbGen and to transfer the known concept-number of the verb, near the end of the stem in VerbGen, into the provisional "prc" variable for AudRecog. In this way, we got the AI internally to recognize and record verb-forms generated internally by the VerbGen module. However, to get the AI to call the correct verb-forms, we had to modify some recent OldConcept code for deciding what "dba" value to store with a lexical item. Now we have a problem with tagging the "dba" of a simple word like МЕНЯ when it comes in.

We cannot rely on the form of МЕНЯ to tell us its "dba", because the form could be either genitive or accusative. We need to extract clues from the incoming sentence in order to assign the proper "dba" during the storage of МЕНЯ.

Wed.1.FEB.2012 -- Tagging Engrams with Parameters

We can perhaps rely on the "seqneed" mechanism of InStantiate to provide the "dba" parameter for a noun or pronoun entering the mind as user input. (Perhaps the "seqneed" variable should change to a "seqseek" variable for greater clarity.) We may be able to strengthen the use of "seqneed" by adding a kind of "pass-over" when a preposition is encountered, so that the software continues to look for a direct-object noun when a preposition-plus-noun combination is detected and skipped.
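The "pass-over" idea could look roughly like the following sketch. The part-of-speech codes and the function name findDirectObject are illustrative assumptions, not the actual Dushka identifiers; the point is only the skipping of a preposition together with its noun.

```javascript
// Illustrative part-of-speech codes; the actual Dushka values may differ.
var PREP = 6, NOUN = 5, PRON = 7;

// Pass-over sketch: while "seqneed" is looking for a direct object,
// skip a preposition together with the noun that follows it, and keep
// scanning for a bare noun or pronoun to serve as the verb's "seq".
function findDirectObject(words) {
  for (var i = 0; i < words.length; i++) {
    if (words[i].pos === PREP) { i++; continue; } // skip prep + its noun
    if (words[i].pos === NOUN || words[i].pos === PRON) return words[i];
  }
  return null;
}

var parsed = [
  { word: "В", pos: PREP },
  { word: "ГОРОДЕ", pos: NOUN },  // object of the preposition, skipped
  { word: "КНИГУ", pos: NOUN }    // the true direct object
];
```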

Where the InStantiate module tests for a "seqneed" of "5" and encounters a satisfying noun or pronoun to become a "seq" for the verb, we make the assumption that the time "t" identifies the temporal location of the noun or pronoun in both the Psi array and the "ruLexicon" array. We insert two lines of code to first "examine" the Russian lexical array and then to substitute a numeric "4" for the "ru4" flag of the "dba" value. Since the noun or pronoun is going to be the "seq" of the verb, that same noun or pronoun warrants a "dba" of "4" as a direct object that should be in the accusative case. However, we may need to make other arrangements if the verb is intransitive and the noun must be in the nominative as a predicate nominative.
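The two inserted lines can be illustrated as follows. This is a hypothetical reduction, not the real InStantiate code: the ruLexicon entries are simplified to plain objects, and the function name tagDirectObject is an assumption made for the example.

```javascript
// Sketch of the inserted logic: when InStantiate sees a "seqneed" of 5
// satisfied by a noun or pronoun at time "t", examine the Russian
// lexical array at that same time-point and overwrite its "dba" with 4
// (accusative), marking the word as the direct object of the verb.
function tagDirectObject(ruLexicon, t) {
  var entry = ruLexicon[t];  // the same "t" indexes both Psi and ruLexicon
  if (entry) entry.dba = 4;  // dba 4 = accusative direct object
  return entry;
}

var ruLexicon = [];
ruLexicon[42] = { word: "БОГА", dba: 0 }; // time-point 42 is illustrative
tagDirectObject(ruLexicon, 42);
```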

Today in the Dushka Russian AI we begin to address a problem that occurs also in our English AI Mind. Sometimes a verb does not need an object, but the AI needlessly says "ОШИБКА" for "ERROR" after the verb. We need to make it possible for a verb to be used by itself, without either a direct object or a predicate nominative. One way to achieve this goal might be to use the "jux" flag in the Psi conceptual array to indicate that the particular instance of the verb needs no object.

We have previously used the "jux" flag mainly to indicate the negation of a verb. If we also use "jux" with a special number to indicate that no object is required, we may have a problem when we wish to indicate both that a verb is negated and that it does not need an object, as in English if we were to say, "He does not play."

One way to get double duty out of the "jux" flag might be to continue using it for negation by inserting the English or Russian concept-number for "NOT" as the value in the "jux" slot, but to make the same value negative to indicate that the verb shall both be negated and shall lack an object, as in, "He does not resemble...."

During user input, we could have a default "jux" setting of minus-one ("-1") that would almost always get an override as soon as a noun or pronoun comes in to be the direct object or the predicate nominative. If the user enters a sentence like "He swims daily" without a direct object, the "jux" flag would remain at minus-one and the idea would be archived as not needing a direct object.
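The double-duty encoding described above can be sketched as a small decoder. The concept-number 250 for "NOT" and the function name decodeJux are illustrative assumptions; only the sign convention comes from the text.

```javascript
// Sketch of the double-duty "jux" encoding (values are illustrative):
//   jux =  NOT : verb is negated
//   jux = -NOT : verb is negated AND needs no object
//   jux = -1   : default during input; verb needs no object
//   jux =  0   : overridden once an object arrives
var NOT = 250; // illustrative concept-number for "NOT"

function decodeJux(jux) {
  return {
    negated: jux === NOT || jux === -NOT,
    needsObject: !(jux === -1 || jux === -NOT)
  };
}
```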

Sun.29.JAN.2012 -- Using Parameters to Find Objects

While we work further on the problem of verbs without objects, we should implement the use of parameters in object-selection. First we have a problem where the AI assigns activation-levels to a three-word input in ascending order: 23 28 26. These levels cause the problem that the AI turns the direct object into a subject, typically with an erroneous sentence as a result.
In RuParser, let us see what happens when we comment out a line of code that pays attention to the "ordo" word-order variable. Hmm, we get an even more pronounced separation: 20 25 30.

Here we have a sudden idea: We may need to run incoming pronouns through the AudBuffer and the OutBuffer in order to assign "dba" tags to them unequivocally. When we were using separate "audpsi" concept-numbers to recognize different forms of the same pronoun, the software could pinpoint the case of a form. We no longer want different concept-numbers for the same pronoun, because we want parameters like "dba" and "snu" to be able to retrieve correct forms as needed. Using the OutBuffer might give us back the unmistakable recognition of pronoun forms, but it might also slow down the AI program.

Before we got the idea about using OutBuffer for incoming pronouns, in the OldConcept module we were having some success in testing for "seqneed" and "pos" to set the "dba" at "4=acc" for incoming direct objects. Then we rather riskily tried setting a default "dba" of one for "1=nom" in the same place, so that other tests could change the "dba" as needed. However, we may obtain greater accuracy if we use the OutBuffer.

Mon.30.JAN.2012 -- Removing Engram-Gaps From Verbs

Yesterday in the Russian AI we experimented rather drastically with using the "ordo" counter to cause words of input to receive levels of activation on a descending slope, so that the AI would be inclined to generate a sentence of response starting with the same subject as the input. We discovered that the original JavaScript AI in English was not properly keeping track of the "ordo" values, so we made the simple but drastic change of incrementing "ordo" only within OldConcept and NewConcept, since every incoming word must pass through one of those two modules.
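The counter change and the descending slope can be sketched together. The shared counter and a linear slope are the only ideas taken from the text; the function names and the particular numbers are illustrative assumptions.

```javascript
// One shared "ordo" counter, bumped only when a word is recognized
// (OldConcept) or learned (NewConcept) -- every incoming word passes
// through exactly one of the two, so each word gets one ordinal.
var ordo = 0;
function assignOrdo() { return ++ordo; }

// Descending activation slope: an earlier word (lower ordo) receives
// higher activation, inclining the AI to reuse the input subject.
function activationFor(wordOrdo, base) {
  return base - 2 * wordOrdo; // slope of 2 per word is illustrative
}

// Three-word input: subject, verb, object.
var subj = assignOrdo(), verb = assignOrdo(), obj = assignOrdo();
```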

Today we have sidetracked into correcting a problem in the VerbGen module. After input with a fictitious verb, VerbGen was generating a different form of the made-up verb in response, but calls to ReEntry were inserting blank aud-engrams between the verb-stem and the new inflection in the auditory channel. By using if (pho != "") ReEntry() to conditionalize the call to ReEntry for OutBuffer positions b14, b15 and b16, we made VerbGen stop inserting blank auditory engrams. However, there was still a problem, because the AI was making up a new form of the fictitious verb but not recognizing it or assigning a concept-number to it as part of the ReEntry process.
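The guard described above can be shown in a reduced form. The surrounding scaffolding here, including the stand-in ReEntry and the function reenterInflection, is hypothetical; only the conditional if (pho != "") ReEntry() comes from the actual fix.

```javascript
// Sketch of the fix: guard each ReEntry call for OutBuffer positions
// b14, b15 and b16 so that empty slots no longer deposit blank
// auditory engrams between the verb-stem and the new inflection.
var audChannel = [];
function ReEntry(pho) { audChannel.push(pho); } // stand-in for the real module

function reenterInflection(b14, b15, b16) {
  [b14, b15, b16].forEach(function (pho) {
    if (pho != "") ReEntry(pho); // the conditional added in VerbGen
  });
}

reenterInflection("Е", "Ш", "Ь"); // full three-character ending
reenterInflection("Ю", "", "");   // one-character ending: no blanks stored
```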

The most glaring problem in the Dushka Russian AI right now is that the AI does not fully activate the subject-pronoun when we type in a short sentence of subject, verb and object. Without a proper subject to provide parameters, the AI fails to select or generate a proper Russian verb-form.

When we type in "люди знают нас" ("People know us"), as an answer we get "ВАМ ЗНАЮТ ТЕБЯ" -- a mishmash of "to you" "they know" "you". In general, the AI seems to be taking the final object entered as input and trying to convert it into the subject for a response.

Thurs.26.JAN.2012 -- Using the "seqneed" Variable

The Russian AI is not setting a Psi "seq" flag when we enter a Russian word as the subject of a following verb. When we inspect the recent 10nov11A.F MindForth code for clues, we discover that in October of 2011 we made major improvements to the method of assigning "seq" tags. We began using the "seqneed" variable as a way of holding off on assigning a "seq" until either the desired verb or the desired noun/pronoun made itself available. However, apparently in the English JavaScript AI we wrote the "seqneed" code only for needing nouns and not yet for needing a verb. No, we did write the code, but it involved avoiding the English auxiliary verb "do", so we accidentally removed the verb-seqneed code from the RuAi. Let us put most of the code back in, and see what happens. Upshot: Once we put the code back into InStantiate, subjects of verbs once again began having a "seq" reference to the verb. The AI even skipped an adverb that we then inserted as a test.

In the ru111229.html version of the Dushka Russian AI we coded the AudBuffer to load Russian characters during SpeechAct, and the OutBuffer to move each Russian word into a right-justified position, subject to the changing of inflectional endings based on grammatical number and case for nouns, and on number and person for verbs. Next we need to determine which forms of a Russian word are ideal for storage in the RuBoot bootstrap sequence.
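The right-justification step can be sketched as follows. A sixteen-slot buffer and the function name rightJustify are assumptions for the example; the point is that the final characters of any word always land in the same end positions, where an inflection can be swapped.

```javascript
// Sketch: right-justify a word in a fixed-width output buffer so that
// inflectional endings always occupy the same final slots, whatever
// the word's length. The buffer size of 16 is an assumption here.
function rightJustify(word, size) {
  size = size || 16;
  var buf = new Array(size).fill("");
  var chars = word.split("");
  for (var i = 0; i < chars.length; i++) {
    buf[size - chars.length + i] = chars[i];
  }
  return buf;
}

var buf = rightJustify("ЗНАЮ"); // "I know" lands in slots 12..15
```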

It seems clear that for feminine nouns like "ruka" for "hand", storage in the singular nominative should suffice, because other forms may be derived by using the OutBuffer to remove the nominative ending "-a" and to substitute oblique endings of any required length.
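The derivation of oblique forms from the stored nominative can be illustrated with a tiny function. This handles only the "-А" pattern named in the text, and inflectFeminine is an illustrative name; the real OutBuffer works on right-justified character positions rather than on strings.

```javascript
// Sketch: derive oblique forms of a feminine noun like РУКА ("hand")
// from the stored nominative singular by removing the final "-А" and
// substituting the required oblique ending.
function inflectFeminine(nomSg, ending) {
  if (nomSg.slice(-1) !== "А") return nomSg; // only the "-А" pattern here
  return nomSg.slice(0, -1) + ending;
}

var acc = inflectFeminine("РУКА", "У"); // accusative singular
var gen = inflectFeminine("РУКА", "И"); // genitive singular
```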

For regular Russian verbs in the group containing "dumat'" for "think" and "dyelat'" for "do", it should be enough to store the infinitive form in the RuBoot module, because the OutBuffer can be used to create the various forms of the present tense. If a human user inputs such a verb in a non-infinitive form, such as in "ty cheetayesh" for "you read", the OutBuffer can still manipulate the forms without reference to an infinitive. This new ability is important for the learning of new verbs. Since there is no predicting in which form a user will input a new Russian verb, the OutBuffer technique must serve the purpose of creating the verb-forms and of tagging their engrams with the proper parameters of person and number.
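For this regular first-conjugation group, the derivation of present-tense forms from any input form might look like the sketch below. The stemming rule is a deliberate simplification that covers only verbs of the ДУМАТЬ/ЗНАТЬ pattern, and the names stemOf and conjugate are assumptions for the example.

```javascript
// Present-tense endings of the first conjugation (ДУМАТЬ-type verbs).
var endings = { "1sg": "Ю", "2sg": "ЕШЬ", "3sg": "ЕТ",
                "1pl": "ЕМ", "2pl": "ЕТЕ", "3pl": "ЮТ" };

// Sketch: isolate the stem from any input form -- the infinitive "-ТЬ"
// or any personal ending -- so that no reference to an infinitive is
// needed. Longer endings are tested first to avoid false matches.
function stemOf(form) {
  var tails = ["ЕТЕ", "ЕШЬ", "ЮТ", "ЕМ", "ЕТ", "ТЬ", "Ю"];
  for (var i = 0; i < tails.length; i++) {
    if (form.slice(-tails[i].length) === tails[i]) {
      return form.slice(0, -tails[i].length);
    }
  }
  return form;
}

function conjugate(form, person) {
  return stemOf(form) + endings[person];
}
```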

Since JavaScript is not a main language for artificial intelligence in robots, our Dushka Russian AI serves only as a proof-of-concept for how to construct a robot AI Mind in a more suitable language. We use JavaScript now because it can display the Russian and because a Netizen can call the AI into being simply by using Internet Explorer to click on the link of the Душка AI Mind.

These notes record the coding of the Russian AI Mind Dushka in JavaScript for Microsoft Internet Explorer (MSIE).

Mon.26.DEC.2011 -- Creating the OutBuffer

Today in the Dushka Russian AI we will try to create the OutBuffer function to change the inflectional ending of a Russian verb.

Tues.27.DEC.2011 -- Right-justifying ЗНАЮ

Although we have created the OutBuffer module to permit the SpeechAct module to hold a Russian verb right-justified in place for a change of inflectional endings on the fly, we are finding it difficult to obtain an "alert" report of the exact contents of the OutBuffer towards the end of a pass through SpeechAct. Into SpeechAct we put a diagnostic "alert" box, and then it appeared that OutBuffer was being called but no data were being revealed.

By testing for the contents when four characters trigger an IF-clause, we have determined that the OutBuffer does indeed take a word from the PhoBuffer and display the word in a right-justified position. We were able to toggle from English to Russian typing and input the Russian verb for "I know", which soon showed up in a right-justified location when the WhatBe module asked a question about the Russian word. Now we are ready to design code that will intercept a Russian verb being "spoken" and change its inflectional ending on the fly, a feat which we will consider to be a major advance in our creation of a Russian AI Mind.

The free, open-source Russian AI will grow large enough to demonstrate a proof-of-concept in artificial intelligence, until the intensive computation of thinking and reasoning threatens to slow the MSIE Web browser down to a crawl. To evolve further, the Russian AI Mind must escape to more powerful programming languages on robots or supercomputers.

Thurs.22.DEC.2011 -- Selecting the Bootstrap Vocabulary

We would like soon to implement the diagnostic display so that we may observe the build-up of the innate Russian vocabulary. Therefore we copy the necessary code from the English AI Mind and we troubleshoot until suddenly we observe a diagnostic display. Now the way is clear for us to keep adding Russian words until we have enough innate concepts for the Russian AI to demonstrate thinking in Russian.

The JavaScript artificial intelligence (JSAI) from the AI4U textbook has finally reached a stage where the free strong AI software may be embodied in a seeing robot. Although the JSAI as an AiApp could see out through the camera of a smartphone, the JavaScript programming language is not suitable for controlling the MotorOutput of a robot. The MindForth AI in the Forth programming language is capable of controlling a robot, but there is some question whether software packages such as Open Computer Vision (OpenCV) can be ported into Forth so as to implement VisRecog. Perhaps instead the computer-vision tail should wag the AI software dog, and either MindForth or the JavaScript AI Mind should be ported into one of the OpenCV languages so as to accelerate the emergence of robotic True AI.

Sun.25.SEP.2011 -- Implementing VisRecog

In JavaScript we create only the stub of the VisRecog module, because we need to show where VisRecog belongs on the robot AI MindGrid in case an enterprising robot-maker or a skilled AI mindmaker decides to take up the Grand Challenge of integrating the thinking AI software with the seeing robot hardware. So first we expand the English bootstrap EnBoot sequence of the JavaScript AI by adding in the English vocabulary words "SEE" and "NOTHING" as a verb to discuss robot vision and a noun or pronoun to serve as the default "nothing" that the stub of VisRecog can actually see. When VisRecog reaches a stage of development commensurate with OpenCV, we may expect the robot visual system to say things like "I see a bird" or "I see you".

When we first started coding the JavaScript artificial intelligence (JSAI) back in anno 2000, we tried to make it cross-browser compatible, especially with Netscape Navigator. Unfortunately, as the artificial Mind quickly became extremely complex, we found that we could not maintain compatibility, and that it was too distracting to try. It was hard enough to code the AI in Microsoft Internet Explorer (MSIE), but at least MSIE gave us the functionality that the AI Mind needed.

Meanwhile the AI Mind has evolved in both JavaScript and Win32Forth. Sometimes the JSAI was ahead of the Forth AI, and sometimes vice versa. In our efforts to get mental phenomena to work in either programming language, we sometimes veered apart in one language from our current algorithm in the other language. Now we are bringing the AI codebase back into as close a similarity as possible in both MSIE JavaScript and Win32Forth (plus 64-bit iForth). We may not offer cross-browser compatibility, but we are making our free AI source code more understandable by letting Netizens examine each mind-module in either Forth or JavaScript.

Fri.10.JUN.2011 -- Solving the AI Identity Crisis

Today we have been running the AI Mind in both JavaScript and Forth so as to troubleshoot the inability of the JSAI to answer the input question "who are you" properly. The JSAI was responding "I HELP KIDS", which is an idea stored in the knowledge base (KB) of the AI as it comes to life in either Forth or JavaScript. The input query is supposed to activate the concept of "BE" sufficiently to override the activation of the verb "HELP" that comes to mind when the Mind tries to say something about itself. We had to adjust the values in the JSAI NounAct module slightly lower for the creation of a "spike" of spreading activation, so that the "BE" concept would win out over the "HELP" concept in the generation of a thought. We have removed the identity crisis of an AI that could describe itself in terms of doing but not being.

We gradually improve the AI Mind in JavaScript by identifying and combating the most glaring bug or glitch that pops up when we summon the virtual entity into existence. Any Netizen using MSIE may simply click on a link to the AiMind program and watch the primitive creature start thinking and communicating. The AI would need a robot body and sensors to flesh out its concepts with knowledge of the real world, but we may approach the AI with a Kritik der reinen Vernunft -- as a German philosopher once wrote about "The Critique of Pure Reason." We are building a machine intellect of pure, unfleshed-out reason, able to think with you and to discuss its thought with you. Our process of eliminating each glitch or bug when we notice it means that the AI Mind has the chance to evolve in two ways. The first AI evolution occurs in these initial offerings of the AI software to our fellow AI enthusiasts. The second AI evolution occurs when the AI propagates to other habitats such as the http://aimind-i.com website. If you are the CEO of a corporate entity, you had better ask around and find out who in your outfit is in charge of keeping up with AI evolution and how many Forthcoders are in your employ.

On Wednesday morning, 1 June 2011, in the Eigerwand.
Yesterday at KCLS/RB I decided to Google
"Larry Parr" and see what my old UW college
buddy was doing. It came back that two months
ago Larry had died. The shock has still not
worn off a day later. Larry Parr was one of
the first people I met in my first freshman
year at the University of Washington.
We were in Honors English together
in Balmer Hall just across the yard from
Denny Hall. Professor Elinor Yaggy had
us reading "A Passage to India" by
E. M. Forster. Before class Larry and I
had many a conversation in the sky-
bridge over to Mackenzie Hall. I had
very boring weekends, and I would marvel
at Larry's stories of his wild weekends.

Three years later, during the
http://en.wikipedia.org/wiki/Summer_of_Love
Larry took me to a brand new coffee shop
autochthonously called "The Last Exit" on
Brooklyn Avenue. There Larry told me all
about Ayn Rand and the Objectivist
philosophy and "Atlas Shrugged" and
Governor Reagan of California. Larry was
young and he had the world by the tail
and he would go on to great things that
would make me brag about Larry Parr
to my chessbum father.

Fast forward about ten years and I was
down in the bowels of the U.W. Suzallo
Library, using old microfilms of the
Seattle Times newspaper. For some
reason, Larry Parr was down there
looking at maps. We had both been
in the U.S. Army, and Larry told me
how he had served as a U.S. Army
spook in Germany listening to
Russian soldiers asking "Tam li?
Tam li?" over the radio.

Years later I ran into Larry Parr at
the Last Exit and he told me he had
written a book about local Seattle
chess players. I asked Larry if the
book included my father, who was
once tied for chess champion of
Washington state. Larry assured me
that he had mentioned my father in
the book, but at the U Book Store I
could not find any mention of my
father in Larry's book.

Another ten years later I was walking
around Green Lake in Seattle when
Larry accosted me and harangued me
about those "retromingent" evil-doers
he was always complaining about.
Larry told me that the authorities
had deported him from Malaysia
because of something he wrote as
a journalist. He was waiting for things
to cool down so that he could return to
Malaysia and be re-united with his wife.

I often repeat to other people Larry's
favorite Bobby Fischer story. According
to Larry Parr, Bobby Fischer was playing
in a major chess tournament in the
Caribbean and the news media commentators
were reporting live that the world chess
champion Bobby Fischer had lost his edge
and had begun making a series of blunders
in the game going on. The commentators
were heaping derisive scorn on the has-been
Bobby Fischer whose series of moves looked
to the chess experts like utter folly.
Dis aliter visum -- the gods saw it otherwise.