Responsive behavior: providing metaknowledge

This is a post that I have been avoiding for a while because of its inherent complexity. The main issue to solve here is: given a query, what is the appropriate response? This area requires using metaknowledge in order to access the right kind of procedural knowledge or the right kind of declarative knowledge. My favorite example of this is Professor Hofstadter’s example in Gödel, Escher, Bach: when you are asked how many people live in Chicago, you refer to your declarative knowledge – i.e. Wikipedia (or Google) says 2.175 million. However, when you are asked how many chairs are in your house, you can’t look it up, so you picture your house in your mind, go through each room, and count them – a procedure to acquire the knowledge.

So far we have talked about implementing declarative knowledge using the Wolfram API and the Wikipedia API. Later on I will get to implementing a kind of procedural knowledge, using vision to answer questions through procedures that operate on the surroundings and memory. Now we have to give the AI a sort of metaknowledge about the query to determine which kind of knowledge to use.

Basically, we will implement a large case-structure to probe what is in the input. The only trick I have discovered so far is to put “common” things last, so that the more specific cases get checked first and the common ones aren’t always selected immediately.
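To make the idea concrete, here is a minimal sketch of that case-structure. The keywords and handler names are hypothetical placeholders, not the actual ones from my code; the point is only the ordering, with the specific cases first and the common catch-all last.

```python
def choose_handler(query):
    """Probe the query and pick a kind of knowledge/handler.

    Specific, easily-recognized cases come first; the generic
    catch-all comes last so it isn't selected immediately.
    """
    q = query.lower()
    if "weather" in q:
        return "weather"            # procedural: query a weather service
    elif "turn on" in q or "turn off" in q:
        return "lights"             # procedural: toggle the lights
    elif "play" in q:
        return "music"              # procedural: start music playback
    elif "how many" in q or "who is" in q or "what is" in q:
        return "lookup"             # declarative: Wolfram / Wikipedia
    else:
        return "smalltalk"          # "common" catch-all goes last

print(choose_handler("turn on the kitchen light"))  # lights
print(choose_handler("how many people live in Chicago"))  # lookup
```

Note that ordering still matters within the chain: a query like “how many songs can you play” hits the `"play"` branch before the `"how many"` one, which is exactly the kind of misfire that putting common cases last is meant to reduce.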

There is a lot here, but I’m sure it’s mostly self-explanatory. The bits about turning lights on/off, checking the weather, and playing music I will get to soon. As you can see, I have implemented some bad coding – nested try/except statements. I found that sometimes the API wouldn’t work (maybe 7% of the time), so this allows it to retry if it has trouble.
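One way to avoid the nested try/except blocks is a small retry wrapper. This is a sketch of that idea, not the code from the post; `attempts` and `delay` are parameters I’m assuming, and the specific exception type would depend on which API client you use.

```python
import time

def with_retries(call, attempts=3, delay=1.0):
    """Call `call()` up to `attempts` times, pausing between failures.

    Replaces nesting try/except blocks inside one another: the loop
    retries a flaky API call and re-raises only after the last attempt.
    """
    last_error = None
    for _ in range(attempts):
        try:
            return call()
        except Exception as e:   # in real code, catch the API's specific error
            last_error = e
            time.sleep(delay)    # brief pause before retrying
    raise last_error

# Usage: wrap the flaky API call in a lambda, e.g.
# result = with_retries(lambda: wikipedia.summary(topic))
```

With a roughly 7% failure rate per call, three independent attempts bring the chance of total failure down to about 0.03%, which is why a simple retry works so well here.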

I don’t really like this code because it is long and not so elegant in terms of its organization. Please feel free to comment and suggest ways to improve!

2 comments

Have you considered the following options for looking for answers – going through a folder of files, or a DB with lots of text, which can be scanned as well? What about adding a Q&A teaching element? For example, I could log in as a teacher and teach the AI some basic IT questions and answers, thus shortening the cases to be looked through, or tagging that information as specifically related to a certain area of knowledge, e.g. management skills, presentation, Excel skills, etc.?

Yes! I’ve thought about this, but in a slightly different context. I have my own personal chat history from Google / cell phone records / etc., and I was working on a program that would utilize these questions/answers to give an AI knowledge about my personal life. This would be a tiny step toward putting one’s own life into a computer…
I think your idea may be more practical and also maybe somewhat easier! I’m sure IT centers get the same 10 questions over and over in different variations, which is great for machine learning.