A technical blog which contains personal opinions, not those of my employer. Major topic areas include the Python language and other technologies of note. This blog contains subjective opinions; any resemblance to facts is entirely coincidental.

Tuesday, March 30, 2010

"My old pal Harry and I are walking in the park, improvising like two jazz musicians - except we're playing with words not melodies. He throws out a line. I have a comeback. He does a riff on my response. Pretty soon we're laughing so hard we're crying. Eventually we collapse, exhausted, on a park bench.

"That was amazing, Harry," I say. "Why don't we do it more often?"

I'm just being social, not really expecting a response, but Harry takes my question seriously. He leans closer, lowering his voice, like he's confiding in me."

Thursday, March 18, 2010

I have been reading my AI textbook again, plus Daniel Dennett's frankly *brilliant* book "Darwin's Dangerous Idea" and I'm afraid it has sparked my overactive imagination!! If you read one serious book this year, make it that one.

Suppose you tried to build an AI entity along these lines...

Suppose you start with a network of nodes. Each node is either an axiom node or a composite node. An axiom node represents something believed to be absolutely true and un-decomposable at the current level of abstraction. A composite node is any node which is not an axiom node. Composite nodes may be grounded (derived entirely from axiom nodes and other grounded composite nodes) or ungrounded (only partially derived from axiom nodes and other composite nodes). The 'somethings' in the nodes may be facts as we know them (propositions about the objective world, say) or may be undescribed, non-propositional nodes which form part of learned relationships.
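To make the node taxonomy concrete, here's a rough Python sketch. All the names here (Node, is_grounded and so on) are my own placeholders, not any real library:

```python
class Node:
    """A node in the knowledge network: either an axiom or a composite."""

    def __init__(self, name, axiom=False, parents=()):
        self.name = name
        self.axiom = axiom            # True for axiom nodes
        self.parents = list(parents)  # nodes this node is derived from

    def is_grounded(self, seen=None):
        """Grounded = an axiom node, or a composite whose whole
        derivation bottoms out in axiom nodes."""
        if self.axiom:
            return True
        if not self.parents:
            return False  # a composite with no derivation is ungrounded
        seen = set() if seen is None else seen
        if self in seen:
            return False  # circular derivations never reach axioms
        seen.add(self)
        return all(p.is_grounded(seen) for p in self.parents)
```

Whether grounding should be a cached property rather than a recomputed scan is exactly the sort of detail I'm waving my hands over.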

One could "imagine" things by overlaying a supposition network over the knowledge network, perhaps using an inheritance structure (i.e. the imagined world derives from the real world, but some subnetworks' truth propositions are deemed to be different).
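The overlay idea might look something like this in Python; the Overlay class and the example facts are purely illustrative:

```python
class Overlay:
    """An imagination network: inherits truth values from the real
    network but can shadow any of them with suppositions."""

    def __init__(self, base):
        self.base = base   # the "real world" truth assignments
        self.local = {}    # suppositions that differ from reality

    def suppose(self, node, truth):
        self.local[node] = truth

    def truth(self, node):
        # local suppositions win; everything else is inherited
        return self.local.get(node, self.base.get(node))


world = {"sky_is_blue": True, "raining": False}
imagined = Overlay(world)
imagined.suppose("raining", True)  # imagine rain; reality untouched
```

The nice property is that an imagined world costs only as much storage as its differences from the real one.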

Input streams and output streams are used to embody the agent. As such, the agent is constantly writing new observation nodes into its knowledge network.

Over this knowledge network, rule processes run. These rule processes are also stored in the knowledge network, but are marked as rule processes. A simple rule process would be one which could enforce simple truth relationships (e.g. if A --> B and A is true, then make B true).
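That simple truth-propagation rule could be sketched as a fixed-point loop over (A, B) implication pairs; again, this is an illustration of the idea, not a real design:

```python
def propagate(truths, implications):
    """Forward-chain modus ponens to a fixed point.

    truths: dict mapping node -> bool
    implications: list of (A, B) pairs meaning A --> B
    """
    changed = True
    while changed:
        changed = False
        for a, b in implications:
            # if A --> B and A is true, make B true
            if truths.get(a) and not truths.get(b):
                truths[b] = True
                changed = True
    return truths
```

Chained implications resolve naturally: A --> B and B --> C with A true eventually forces C true.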

Nodes would be scanned for identity (node A is node B if the two are sufficiently indistinguishable, with high confidence), a form of network simplification.
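Here's one (entirely hypothetical) way the identity scan could work, using Jaccard similarity over feature sets as a stand-in for "sufficiently indistinguishable"; the threshold is my own made-up number:

```python
def merge_indistinguishable(nodes, threshold=0.9):
    """Map each node name to a canonical name, unifying nodes whose
    feature sets are sufficiently similar.

    nodes: dict mapping name -> set of observed features
    """
    canonical = {}
    names = list(nodes)
    for i, a in enumerate(names):
        canonical.setdefault(a, a)
        for b in names[i + 1:]:
            fa, fb = nodes[a], nodes[b]
            union = fa | fb
            overlap = len(fa & fb) / len(union) if union else 1.0
            if overlap >= threshold:          # Jaccard similarity stand-in
                canonical[b] = canonical[a]   # b is deemed identical to a
    return canonical
```

After the scan, the network could be rewritten in terms of canonical names only, which is the simplification payoff.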

More complex rule processes could be "imagined" and run over the imagination network to evaluate their performance. In this way, imagination allows the entity to test potential rules for their truth value.

The model scales to the extent that the network can be decomposed (i.e. the 'locality' factor of the network with respect to the problem at hand). For example, the application of simple rules which operate on, say, 3 nodes would be parallelisable and scale well, while complex rules which, say, operate on all nodes to a depth of N with branching factor B would not parallelise as effectively.

The fundamental mode of processing would be a build-and-test model of imagination where new knowledge networks are imagined then evaluated, and then incorporated if they perform better. This makes the system a memetic evolutionary system, which seems to me to be a prerequisite for sustainable machine learning.
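The build-and-test loop is essentially hill climbing over candidate networks. A toy sketch, where mutate and fitness are placeholders of my own (here climbing a single number toward a target, standing in for a whole knowledge network):

```python
import random

def evolve(network, mutate, fitness, generations=100):
    """Imagine variants of the current network, evaluate each, and
    incorporate a variant only if it performs better."""
    best, best_score = network, fitness(network)
    for _ in range(generations):
        candidate = mutate(best)    # "imagine" a new network
        score = fitness(candidate)  # evaluate its performance
        if score > best_score:      # incorporate only improvements
            best, best_score = candidate, score
    return best
```

A real memetic system would presumably keep a population of variants rather than a single best, but the imagine-evaluate-incorporate shape is the same.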

Learning success comes through untrained reinforcement, based on assessing expectations of future observational input. Because the entity is based on input *streams* and output *streams*, even non-action is a choice of the agent.

All concepts, then, are either grounded concepts (fully understood and linked to base phenomena) or ungrounded concepts (not fully decomposable). I would also suggest that truth generally be a real-valued quantity, but that Absolutely False and Absolutely True also be allowable values.
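A graded truth value with absolute end-points might be represented like this; the sentinel values and the update rule are my own invention:

```python
# Sentinels for the two absolute truth values; no amount of
# evidence is allowed to revise them.
ABSOLUTELY_TRUE = "T"
ABSOLUTELY_FALSE = "F"

def update(truth, evidence):
    """Nudge a graded truth value (a float in [0, 1]) toward the
    evidence; absolute values are fixed points."""
    if truth in (ABSOLUTELY_TRUE, ABSOLUTELY_FALSE):
        return truth
    return min(1.0, max(0.0, truth + evidence))
```

Only axiom nodes would presumably be entitled to the absolute values; everything else lives on the graded scale.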

This model supports "making assumptions" by building an imagination network where, for example, all things which are believed to be 98% true are taken to be absolutely true, then the results evaluated for performance. Experiments can be run in which nodes can be tested for unification, or a node could be split into two nodes then learning and assessment re-performed.
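The 98% thresholding could be as simple as this (the threshold value and the representation are, of course, just illustrative):

```python
def assume(beliefs, threshold=0.98):
    """Build an imagined network in which any belief at or above the
    confidence threshold is promoted to absolutely true.

    beliefs: dict mapping node -> confidence in [0, 1]
    """
    return {node: (True if conf >= threshold else conf)
            for node, conf in beliefs.items()}
```

The output is a fresh imagined network, so a bad assumption costs nothing: the original graded beliefs survive untouched.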

Of course, I haven't even addressed any questions about the initial state of the system, how the learning algorithms actually work (how knowledge is propagated and how reinforcement is applied), whether the AI needs multiple conceptual subnets, how current standard problem-solving techniques might be integrated, etc. But that's okay, this is my blog and I'm just rambling.