Thursday, March 17, 2005

> your AI reminds me of an old Czech fairy-tale where a dog and a cat
> wanted to bake a really tasty cake ;-), so they mixed all kinds of
> food they liked to eat and baked it. Of course the result wasn't
> quite what they expected ;-).

That's not the case. :-)

I know a lot of stuff, and I carefully selected features for strong AI. I rejected far more features than I included. And I rejected them because I thought these features are useless in true AI, even though they are useful for weak AI.

> I think you should start to play with something a bit less challenging> what would help you to see the problem with your AI.

Totally agree. As I said, I'm working on limited AI, which is simultaneously:
1) Weak AI.
2) A few steps toward strong AI.

There are many weak AI applications. Some weak AIs are steps toward strong AI; most contribute almost nothing to strong AI. That's why I need to choose limited AI functionality carefully.

Your suggestion below may become a good example of such limited AI, given a proper system structure.

But I probably won't work on it in the near future, because it doesn't have much business sense.

======= Jiri's idea =======
How about developing a story generator? The user would say something like: "I want an n-page-long story about [a topic], genre [a genre]." Then you could use Google etc. (to save some coding) and try to generate a story by connecting some often-connected strings. Users could provide the first sentence or two as an initial story trigger. I do not think you would generate a regular 5-page story using just your statistical approach. I think it would be a pretty odd mix of strings with a pointless storyline = something far from the quality of an average man-made story.
===========================

Sunday, March 13, 2005

Ben, this idea is wrong:
-----
Lojban is far more similar to natural languages in both intent, semantics and syntax than to any of the programming languages.
-----

Actually, Lojban is closer to programming languages than to natural languages. The structure of Lojban and of programming languages is predefined. The structure of natural languages is not predefined; the structure of a natural language is defined by examples of that language's use. This is the key difference between Lojban and natural language.

Since the structure of natural language is not predefined, you cannot put the language structure into NL parser code. Instead you need to implement a system which will learn the rules of a natural language from a massive amount of examples in that language.

You are trying to code natural language rules into a text parser, aren't you? That's why you can theoretically parse Lojban and programming languages, but you cannot properly parse any natural language even in theory.

If you want to parse natural language properly, you need to predefine as few rules as possible. I think a natural language parser has to be able to recognize words and phrases. That's all an NL text parser has to be able to do.

All other mechanisms of natural language understanding should be implemented outside the text parser itself. These mechanisms are:
- A word dictionary and a phrase dictionary (to serve as a link between natural language (words, phrases) and internal memory (concepts)).
- Relations between concepts, and mechanisms which keep these relations up to date.
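As a rough sketch of this split (the dictionaries, names, and concept-id scheme below are purely illustrative, not part of any real system), a parser in this spirit would do nothing but map words and phrases to concept ids:

```python
# Minimal sketch: the parser only recognizes words and phrases and maps
# them to concept ids; relations between concepts live outside the parser.
# These toy dictionaries stand in for learned word/phrase dictionaries.
PHRASES = {("life", "after", "death"): "concept:afterlife"}
WORDS = {"mary": "concept:mary", "church": "concept:church"}

def parse(text):
    """Return the concept ids the parser recognizes -- nothing more."""
    tokens = text.lower().split()
    concepts, i = [], 0
    while i < len(tokens):
        for n in (3, 2):  # try the longest phrase match first
            if tuple(tokens[i:i + n]) in PHRASES:
                concepts.append(PHRASES[tuple(tokens[i:i + n])])
                i += n
                break
        else:  # fall back to the word dictionary; unknown words become new concepts
            concepts.append(WORDS.get(tokens[i], "concept:new:" + tokens[i]))
            i += 1
    return concepts

print(parse("Mary believes in life after death"))
# ['concept:mary', 'concept:new:believes', 'concept:new:in', 'concept:afterlife']
```

Everything downstream (relations, weights, forgetting) would then operate on those concept ids, not on raw text.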

I think it's a mistake to teach AI any language other than natural language.

Lojban is certainly not a natural language (because it wasn't really tested on a variety of real-life communication purposes).

The reasons why strong AI has to be taught a natural language, not Lojban:

1) If an AI understands natural language (NL), then it's a good sign that the core AI design is correct and quite close to optimal. If an AI cannot learn NL, it's a sign that the core AI design is wrong. If an AI can learn Lojban, it proves nothing from the strong AI standpoint. There are a lot of VB, Pascal, C#, and C++ compilers already. So what?

2) NL understanding has immediate practical sense. Understanding of Lojban has no practical sense.

3) NL text base is huge. Lojban language text base is tiny.

4) Society is a must-have component of intelligence. A huge number of people speak/write/read NL. Almost nobody speaks Lojban.

Bottom line: if you spend time/money on designing/teaching an AI to understand Lojban, it would just be a waste of your resources. It has neither strategic nor tactical use.

Friday, March 11, 2005

Jiri, you misunderstand what logic is about. Logic is not something 100% correct. Logic is a process of building conclusions from highly probable information (facts and relations between these facts). By "highly probable" I mean over 90% probability. Since logic does not operate on 100% correct information, it generates both correct and incorrect answers. In order to find out whether a logical conclusion is correct, we need to test it. That's why an experiment is necessary before we can rely on a logical conclusion.

Let's consider an example of a logical process:
A) Mary goes to church.
B) People who go to church believe in God.
C) Mary believes in God.
D) People who believe in God believe in life after death.
E) Mary believes in life after death.

Actually:
1) We may have wrong knowledge that Mary goes to church (we could confuse Mary with someone else, or Mary might have stopped going to church).
2) Not all people who go to church believe in God.
3) We could make a logical mistake in assuming that (A & B) imply C.
4) Not all people who believe in God believe in life after death.
5) We could make a logical mistake in assuming that (C & D) imply E.

Conclusion #1: since logic is not reliable, long logical chains can be less probable than even unreliable observations. For instance, if Mary's husband and Mary's mother mention that Mary doesn't believe in life after death, then we'd better rely on their words more than on our 5-step logical conclusion.
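That decay is easy to quantify. A small sketch, using the ~90%-per-step figure assumed above (the 80% witness-reliability number is my own illustrative choice):

```python
# Each inference step is only ~90% reliable, so confidence in a chain
# of conclusions decays geometrically with its length.
p_step = 0.9
p_chain = p_step ** 5          # the 5-step Mary example
print(round(p_chain, 2))       # 0.59

# A single unreliable observation (say a witness who is right 80% of
# the time) already outweighs the 5-step chain:
print(p_chain < 0.8)           # True
```

So even generous per-step reliability leaves a long chain weaker than one mediocre direct observation.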

Conclusion #2: since multi-step logic is unreliable, multi-step logic is not a must-have component of intelligence. Therefore the logic implementation can be skipped in the first strong AI prototypes. Limited AI can function very well without multi-step logic.

Friday, March 04, 2005

Jiri> And try to understand that when testing AI (by letting it solve
Jiri> particular problem(s)), you do not need the huge amount of data you
Jiri> keep talking about. Let's say the relevant stuff takes 10 KB (and it
Jiri> can take MUCH less in many cases). You can provide 100 KB of data
Jiri> (including the relevant stuff) and you can perform lots of testing.
Jiri> The solution may be even included in the question (like "What's the
Jiri> speed of a car which is moving 50 miles per hour?"). There is
Jiri> absolutely no excuse for a strong AI to miss the right answer in those
Jiri> cases.

Do you mean that 100 KB of data as background knowledge is enough for strong AI? Are you kidding?

By the age of one year, a human baby has parsed at least terabytes of information, and keeps at least many megabytes of it in memory.

Do you think a one-year-old human baby has strong intelligence, even with all this knowledge?

Yes, artificial intelligence could have an advantage over natural intelligence: AI can be intelligent with a smaller amount of information. But not with 100 KB. 100 KB is almost nothing for general intelligence.

Dennis>> why are 4 types of relations better than one type of relation?

Jiri> Because it better supports human-like thinking. Our mind is working
Jiri> with multiple types of relations on the level where reasoning applies.

Our mind works with far more than 4 types of relations. That's why it's not a good idea to implement exactly 4 types: on one hand it's too complex, on the other hand it's still not enough. A better approach would be to use one relation type which is able to represent all the other types.
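A minimal sketch of that single-relation idea (the record fields and example entries are my own illustration, not from any actual design): one weighted relation whose type is itself just another concept, so new relation kinds are data rather than code.

```python
# One generic weighted relation instead of N hard-coded relation types.
from dataclasses import dataclass

@dataclass
class Relation:
    source: str    # concept id
    target: str    # concept id
    kind: str      # the relation type is itself a concept, e.g. "cause"
    weight: float  # strength of the relation, updated statistically

# One storage format covers what would otherwise be many hard-coded types:
memory = [
    Relation("doors", "lean", "cause", 0.4),
    Relation("wheel", "car", "part-of", 0.9),
]
print(len({r.kind for r in memory}))  # 2 -- kinds are data, not code
```

Adding a fifth, tenth, or hundredth relation type then requires no change to the memory structure.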

Thursday, March 03, 2005

>> 1) Could you please give me an example of two words which are used near
>> each other, but do not have cause-effect relations?

> I'll give you 6. I'm in a metro train right now and there is a big
> message right in front of me, saying: "PLEASE DO NOT LEAN ON DOORS"
> What cause(s) and effect(s) do you see within that statement?

Let's imagine that a strong AI is in a reasoning process. In order to do general reasoning, the AI needs background knowledge (common sense). That's what Cycorp is trying to achieve. Now let's consider what kind of background knowledge can be extracted from the statement "PLEASE DO NOT LEAN ON DOORS". (Obviously this knowledge extraction should happen outside actual decision-making time, because a huge amount of text must be parsed, and our test statement is just one of many millions of statements.)

OK, here's what we learn from the test statement:
- If you think about "lean", think about "doors" as one of the options.
- If you think about "door", think about "lean" as one of the options.
- If you say "do not", think about saying "please" too.
- If you say "do", think about saying "please" too.
- "Doors" is a possible cause for "not lean".
- "Doors" is a possible cause for "lean".
- You "lean" "on" something.
- If you think about "on", think about "doors" as one of the options.

You can extract even more useful information from this sentence. Even "please" -> "doors" and "doors" -> "please" have some sense, though not much. :-) A statistical approach would help to find which relations are more important than others.
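A minimal sketch of such statistical pre-rating (the window size and the two-sentence corpus are made up for illustration): count how often word pairs co-occur within a small window, so pairs seen more often accumulate higher weights.

```python
# Count co-occurring word pairs within a sliding window; frequent pairs
# get higher counts, which serve as crude relation weights.
from collections import Counter

def cooccurrences(sentences, window=4):
    pairs = Counter()
    for s in sentences:
        words = s.lower().split()
        for i, w in enumerate(words):
            for v in words[i + 1:i + window]:  # neighbors within the window
                pairs[(w, v)] += 1
    return pairs

corpus = ["please do not lean on doors",
          "do not lean on the railing please"]
weights = cooccurrences(corpus)
print(weights[("lean", "on")])      # seen in both sentences -> weight 2
print(weights[("please", "do")])    # seen once -> weight 1
```

Over millions of sentences, these counts become the pre-rated choices mentioned below.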

Do you see my point now?

When it's time to make an actual decision, the AI would have a common-sense database which provides a large, but not endless, number of choices to consider. All these choices would be pre-rated, which would help to prioritize their consideration.

Now let's consider whether the structure of the main memory should be adjusted in order to transform limited AI into strong AI. I don't see any reason to change the memory structure to make such a transition. Additional mechanisms for updating cause-effect relations would be introduced, such as experiment, advanced reading, and "thought experiment". But all these new mechanisms would still use the same main memory.

Tuesday, March 01, 2005

1) Goals defined by an operator are even more dangerous.

2) You can load data from Cyc, but this data wouldn't become knowledge. Therefore it wouldn't be learning, and wouldn't be useful. Goals are still necessary for learning. Only goals give sense to learning.

3) Why would a long question cause a "no answer found" result? Quite the contrary: the longer the question, the more links to possible answers can be found.

4)
>> Bottom line: "Generalization is not core AI feature".

> It's not a must for AI, but it's a pretty important feature.
> It's a must for Strong AI. AI is very limited without that.

- I have ideas about how to implement the generalization feature. Would you like to discuss them?
- I think it's not a good idea to implement generalization in the first AI prototype. Do you think generalization should be implemented in the first AI prototype?

5)> "Ability to logically explain the logic" is just useful for invalid-idea > debugging.> So I recommend to (plan to) support the feature.

All features are useful. The problem is that when we put too many features into a software project, it just dies. That's why it's important to prioritize the features correctly.

Do you think that logic should be implemented in the first AI prototype?

50 years of trying to put logic into the first AI prototype proved that it's not a very good idea.

6) Reasoning tracking
> It's much easier to track "reasons for all the (sub)decisions"
> for OO-based AI.

No, it's not easier to track reasoning in AI than in a natural intelligent system. Evolution could have coded such an ability, but it didn't produce 100% tracking of reasoning. There are essential reasons for avoiding it: such tracking simply makes an intelligent system more complex, slower, and therefore very awkward. And an intelligent system is very fragile even without such a "tracking improvement".

Bottom line: the first AI prototype doesn't need to track the process of its own reasoning. Only reasoning outcomes should be tracked.

7) AIML
> Your AI works more-less in the AIML manner. It might be fun to play
> with, but it's a dead end for serious AI research.
> AIML = "Artificial Intelligence Markup Language", used by Alice and
> other famous bots.

Does AIML have the ability to relate every concept to every other concept?
Do these relations have weights?
Does one word correspond to one concept?
Is the learning process automated in Alice?
Is a forgetting feature implemented in Alice?

8)
>> If I need 1 digit precision, then my AI needs just to remember few hundred
>> combinations

> searching for stored instances instead of doing real
> calculation is a tremendous inefficiency for a PC based AI.

Calculation is faster than search, but only if you already know that calculation is necessary. How would you know that calculation is necessary when you parse text? The only way is to look up what you have in your memory. So you can just find the answer there.

But yes, sometimes the required calculations are not that easy. In that case the best approach would be to extract approximate results from the main memory and make precise calculations through math functions. And again, this math-function integration is not a top-priority feature. Such a feature is necessary for technical tasks, not for basic activity.
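A sketch of that lookup-first, calculate-as-fallback idea (the stored table, function name, and the hypotenuse task are my own illustrative choices):

```python
# Try associative recall from memory first; fall back to a precise math
# function only when no stored instance exists, then remember the result.
import math

remembered = {(3, 4): 5.0, (6, 8): 10.0}  # previously seen hypotenuses

def hypotenuse(a, b):
    if (a, b) in remembered:       # fast recall, like a human's "just knowing"
        return remembered[(a, b)]
    result = math.hypot(a, b)      # precise calculation as the fallback
    remembered[(a, b)] = result    # store it for next time
    return result

print(hypotenuse(3, 4))   # recalled from memory: 5.0
print(hypotenuse(5, 12))  # computed, then remembered: 13.0
```

The memory answers the common cases; the math function is only invoked when the memory has nothing to offer.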

>> Intelligence is possible without ability to count.

> Right, but the ability is IMO essential for a good problem solver.

Correct, but you/me/whoever cannot build a good problem solver in the first AI prototype anyway.

9) Design is limited, but not dumb
> Don't waste time with a dumb_AI design.

The design is not dumb, it's limited, and it can be extended in the second AI prototype. Feel the difference.

10) Real life questions
> If I say obj1
> is above obj2 and then ask if the obj2 is under the obj1 then I expect
> the "Yes" answer based on the scenario model the AI generated in its
> imagination. Not some statistical junk.

This is not a real-life question for an AI. Far more probable questions are: "Here is my resume; please give me matching openings" or "I'm looking for a cell phone with X, Y, Z features; my friends have P and Q plans; what would you recommend?"

Limited AI can be used for answering these questions.

11) The first AI prototype's target on the intelligent jobs market
> AI's ability to produce unique and meaningful thoughts. To me, that's
> where the AI gets interesting and I think it should be addressed in
> the early design stages if you want to design a decent AI.

Humans do all kinds of intelligent jobs. Some are primitive (like first-level tech support); some are pretty complex (scientist / software architect / entrepreneur / ...).

It's natural for the first AI prototype to try to replace humans in primitive intelligent jobs first. Do you agree?

It's practically impossible to build a first AI prototype which will replace humans in the most advanced intelligent jobs. Agree?

12) "brain design" vs "math calculator"
> don't you see that it's a truly desperate attempt to use
> our brain for something it has an inappropriate design for? The human
> brain is a very poor math-calculator. Let me remind you that your AI
> is being designed to run on a very different platform.

Let me remind you that the human brain is a far better problem solver than any advanced math package. A modern math package is not able to solve any problem without a human's help. A human can solve most problems without a math package.

Think again: what exactly is missing in modern software? Draw your own conclusion about what the core AI features are.

The platform is irrelevant here. So what if you can relatively easily add a calculator feature to the AI? The calculator feature is not critical to intelligence at all. It would just make the first AI prototype more awkward and more time-consuming to develop. Do you want that?

13) Applicability of math skills to real-life problems
>>> For example, my AI can learn the Pythagorean Theorem: a^2 + b^2 = c^2.

>> How would you reuse this math ability in a decision-making process like
>> "finding an electrical power provider in my neighborhood"?

> I do not think it would be useful for that purpose (even though a
> powerful AI could make a different conclusion in a particular
> scenario). The point is that general algorithms are useful in many
> cases where particular instance of the algorithm based solution is not
> useful at all.

Do you mean that you have some general algorithm which can solve both the "Pythagorean Theorem" and the "finding an electrical power provider in my neighborhood" questions? What is this general algorithm about?

14) Advanced Search
> I do not know how exactly google sorts the results but it seems to
> have some useful algorithms for updating the weights. Are you sure
> your results would be very different?

Yes, they would be different:
1) Google excludes results which don't have an exact match.
2) Google doesn't work with long requests.
3) Google has limited ability to understand natural language.
4) Google doesn't follow an interactive discussion with the user.

I have some ideas about how to improve the final search results, but the first step would still be a search on Google :-), because of performance and information-gathering issues.

> Since you work on a dumb AI which IMO
> does not have a good potential to become strong AI, the related
> discussion is a low priority to me.

Again, it's not dumb. It's limited because it's just the first prototype.

Do you prefer the waterfall development process or Rapid Application Development (RAD) in software development? What about your preferences in research and development?