Unity3D Game World Goals

THIS PAGE IS OBSOLETE

This page gathers a list of behaviors that we would like to see the OpenCog controlled agent carry out in the Unity3D game world, and that seem feasible based on AI that [as of September 2013] is already "mostly developed and somewhat working" in OpenCog (even if not fully connected to the game world or tested in that context). It should be considered a "living document" at this stage (Sep 2, 2013), though it may be frozen at a certain point and used to generate a list of hard requirements.

As a proposed software-metrics tool: when desired behavior #1 is near completion, it should be possible to observe it via screencast at justin.tv. Unity3D players and OpenCog Embodiment backends, launched from the Buildbot, will run in a loop on this channel, reporting various metrics before, during, and after continuous test runs. Choosing metrics and designing tests should be part of each task listed here.

The page also discusses what AI methods seem to be needed to achieve these behaviors.

NOTE: The goal is not to engineer a specific list of learning behaviors as hard-coded capabilities. Rather, the goal is to make OpenCog able to carry out these AND OTHER QUALITATIVELY SIMILAR TASKS in the game world, via its "somewhat general" learning capability.

List of Desired Behaviors

1) The AI learns to collect a bunch of batteries (or energy sources) in one place. For example, suppose that: (a) other characters tend to grab batteries when the AI is not near them, but tend not to grab batteries when the AI is near them; (b) other characters tend to collect batteries during the day but not at night. Then the AI should learn to hunt for batteries during the night, and store them somewhere -- and then guard them during the day.

2) The AI should be able to learn who to steal batteries from. Suppose there are two other characters, X and Y. Both X and Y have hoards of batteries. When the AI steals batteries from somebody's hoard, that somebody tends to chase the AI. When the AI gets caught by somebody chasing him, this wastes his energy, and sometimes results in him losing his battery. Suppose Y runs faster than the AI on average; but the AI runs faster than X on average. Then the AI should learn to steal batteries from X rather than from Y.

3) The AI should be able to learn to build a wall to keep thieves away. Suppose the AI has hoarded some batteries, and has noticed that other agents want to steal his batteries. Then, suppose X, who likes to steal the AI's batteries, is observed by the AI to be UNABLE to climb walls more than 2 blocks tall. But suppose that the AI is able to jump or climb over walls 3 blocks tall. Then the AI should figure out to build a wall around his hoard of batteries, to keep X out.

4) Suppose that X has a hoard of batteries, and occasionally gives them to others. Suppose that Y often approaches X, asks "Please give me some batteries", and then gets some batteries. The AI should be able to learn to approach X and ask "Please give me some batteries."

5) Suppose that the other characters, X and Y, tend to smile when they have lots of batteries in their hoards. Then, the AI should be able to recognize this, and when it sees one of them smiling, it should figure out that getting a battery from that character (or his hoard) is a good idea.

6) The AI should be able to figure out when asking vs. grabbing-and-running is the better strategy for getting a battery from another character. For instance, if X says yes more often than Y, this makes asking more worthwhile where X is concerned than where Y is concerned. If Y runs slower than X, this makes grabbing and running more worthwhile where Y is concerned.

7) Suppose there is a certain block (say, a purple block) that the AI can move, but the other characters cannot move (and the AI observes them trying). Suppose that neither the AI nor any of the other characters can easily climb a high wall. Then, the AI should figure out to build a wall around his batteries, and make a doorway in the wall out of purple blocks (i.e. a section of the wall made of purple blocks, which the AI can move but the other characters cannot).

8) Suppose another character X tells the AI what it is doing, e.g. "You are walking", "You are building with blocks", "You are building with red blocks", "You are near a tree," etc. Then the AI should be able to answer simple questions based on this information it's been told, e.g. "What are you doing?" ==> "I am walking" ... "What are you near?" ==> "I am near a tree" ....

9) Suppose another character X indicates objects, e.g. by pointing to a tree and saying "tree", pointing to a battery and saying "battery," etc. Then the AI should be able to learn these word-object associations, so that e.g. when it's asked "What is that?" by a character pointing at a tree, it can reply "A tree."

10) Deception! Suppose that character X is observed to look for batteries in trees that have red boxes next to them. Then the AI could put red boxes next to some trees in order to deceive X into looking for batteries near these trees -- so that the AI can find batteries in other places.

Thoughts on AI Methods Needed to Achieve Desired Behaviors

Clustering

Seems a priority

For a variety of purposes we will need a powerful clustering agent, able to scan the Atomspace and form categories consisting of Atoms that are highly similar to each other.

Further, we will need to form clusters based on paying special attention to particular features of Atoms, e.g. spatial-location-based clustering (for finding groups of similar entities).

I am not sure what clustering algorithm to use. But I am tempted to make use of the dimensional embedding code that's been written for the Atomspace, and use EM clustering in the embedding space. I have been struck repeatedly lately by how much better EM works than k-means.

MOSES can also be used as a clustering algorithm, and should be very effective, but also slow, so probably this should not be our only method.

Pertinent examples of clustering...

Simple example: how does OpenCog know when another agent has accumulated a "bunch" of batteries? It would be convenient for it to form a ConceptNode reflecting a set of Atoms that all have similar type and similar spatial location (i.e. a bunch of similar things all close together).
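One way to bootstrap such a "bunch" ConceptNode is simple single-link spatial grouping: same-type objects whose positions chain together within a small radius form one group. A minimal sketch in plain Python (not actual Atomspace code; the positions and radius here are invented for illustration):

```python
import math

def spatial_groups(positions, radius=2.0):
    """Single-link grouping: two objects land in the same group if they
    are connected by a chain of neighbours, each step within `radius`."""
    unvisited = set(range(len(positions)))
    groups = []
    while unvisited:
        seed = unvisited.pop()
        group, frontier = [seed], [seed]
        while frontier:
            i = frontier.pop()
            near = [j for j in unvisited
                    if math.dist(positions[i], positions[j]) <= radius]
            for j in near:
                unvisited.discard(j)
                group.append(j)
                frontier.append(j)
        groups.append(sorted(group))
    return groups

# Five battery positions: three clustered near the origin, two far away.
# The first group is a candidate "hoard" to be reified as a ConceptNode.
batteries = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10)]
groups = spatial_groups(batteries)
```

A real implementation would run over Atoms carrying spatial predicates rather than raw tuples, but the grouping logic is the same.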

If we had 10 NPCs rather than just 2, we could ask: how does OpenCog distinguish "fast runners" from "slow runners"? This could be done via clustering the NPCs, where the "speed" property would be the main distinguisher between the NPCs in the two categories.
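As a concrete sketch of that speed-based split, here is a hand-rolled two-component EM over 1-D "speed" observations. This is an illustrative stand-in for running EM in the embedding space, not project code; the speed values are invented:

```python
import math

def em_1d(points, iters=50):
    """Minimal EM for a 2-component 1-D Gaussian mixture.
    Returns the two component means, sorted ascending."""
    mu = [min(points), max(points)]   # crude deterministic initialisation
    sigma = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in points:
            w = [pi[k] / (sigma[k] * math.sqrt(2 * math.pi))
                 * math.exp(-(x - mu[k]) ** 2 / (2 * sigma[k] ** 2))
                 for k in (0, 1)]
            s = sum(w)
            resp.append([wk / s for wk in w])
        # M-step: re-estimate means, variances, and mixing weights
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, points)) / nk
            var = sum(r[k] * (x - mu[k]) ** 2
                      for r, x in zip(resp, points)) / nk
            sigma[k] = max(math.sqrt(var), 1e-3)   # avoid collapse
            pi[k] = nk / len(points)
    return sorted(mu)

# Invented NPC running speeds: a slow cluster near 1.0, a fast one near 5.0
speeds = [0.9, 1.0, 1.1, 4.9, 5.0, 5.1]
means = em_1d(speeds)
```

The two recovered means give natural centroids for "slow runner" and "fast runner" ConceptNodes.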

What is a "wall"? A wall is a certain set of "block groups" that share some properties. But how would it be isolated as a ConceptNode? There are lots of ways this could be done, but if there were 10 walls in the world and 50 other block groups, clustering could potentially identify a cluster of "walls" which all have similar properties. (MOSES, launched as a supervised learning algorithm, could then be used to find other properties of the "wall" category...)

Subgraph Mining

Seems a priority (at least in a simple form)

Simple subgraph mining will be very useful for these behaviors.

It seems we could probably make do with a sub(hyper)graph miner that is

scalable to a moderately large in-RAM Atomspace

capable of online learning in near real time

perhaps restricted to simple patterns, e.g. ignoring variables (or using variables in only a very simple way)

A simple miner like this could find common conjunctions of Atoms, which is basically what's needed for the behaviors above.
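To make "finding common conjunctions" concrete, here is a toy frequency counter over observed episodes. The episode contents and the min-support threshold are invented; a real miner would work over Atoms and would need the scalability and online-update properties listed above:

```python
from collections import Counter
from itertools import combinations

def frequent_conjunctions(episodes, min_support=2, max_size=2):
    """Count how often each small set of atoms co-occurs across episodes,
    keeping those observed at least `min_support` times."""
    counts = Counter()
    for atoms in episodes:
        for size in range(1, max_size + 1):
            for combo in combinations(sorted(atoms), size):
                counts[combo] += 1
    return {c: n for c, n in counts.items() if n >= min_support}

# Invented observation episodes (sets of co-occurring atoms)
episodes = [
    {"near(tree)", "has(red_box)", "found(battery)"},
    {"near(tree)", "has(red_box)", "found(battery)"},
    {"near(tree)", "raining"},
]
patterns = frequent_conjunctions(episodes)
```

The recurring pair {has(red_box), found(battery)} is exactly the kind of conjunction behavior #10 (deception) would exploit.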

Of course we would want to make this miner extensible later to more sophisticated mining of patterns containing multiple variables etc.

PLN

We will need simple backward as well as forward chaining, involving a full spectrum of link types.

It's unclear how complex the variable binding in the backward chainer will need to get.

Integration of PLN with Attention Allocation for inference control will be crucial.
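For the shape of the backward chaining involved, here is a toy illustration. Strengths combine by naive multiplication, a crude stand-in for PLN's actual truth-value formulas, and the facts and rules are invented from the smiling-character scenario (behavior #5):

```python
def backward_chain(goal, facts, rules, depth=5):
    """Naive backward chainer: a goal holds if it is a known fact, or if
    some rule concludes it and all the rule's premises recursively hold.
    Strengths multiply along the chain (crude stand-in for PLN formulas)."""
    if goal in facts:
        return facts[goal]
    if depth == 0:
        return 0.0
    best = 0.0
    for premises, conclusion, strength in rules:
        if conclusion == goal:
            s = strength
            for p in premises:
                s *= backward_chain(p, facts, rules, depth - 1)
            best = max(best, s)
    return best

# Invented knowledge base for the smiling scenario
facts = {"smiling(X)": 1.0}
rules = [
    (("smiling(X)",), "has_batteries(X)", 0.8),
    (("has_batteries(X)",), "worth_approaching(X)", 0.9),
]
s = backward_chain("worth_approaching(X)", facts, rules)
```

Even in this toy form, the depth bound hints at why attention-guided inference control is crucial: real chaining branches explosively without it.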

MOSES

Seems not a priority (though needed for more complex behaviors/situations)

MOSES could be used here for learning predicates to imply goals. It would serve a similar function to subgraph mining, but could learn predicates representing arbitrary (generally small) Boolean functions rather than (as in the case of simple subgraph mining) just conjunctions.
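To make the contrast with conjunction mining concrete, here is a brute-force stand-in for MOSES: enumerate tiny Boolean formulas over observed features and return one consistent with the goal labels. The features and samples are invented (a "stealing succeeds" predicate from behavior #6); real MOSES searches program trees evolutionarily rather than exhaustively:

```python
from itertools import combinations

def learn_boolean_predicate(samples):
    """Search tiny Boolean formulas (single features, pairwise AND/OR)
    for one that matches every (feature_dict, goal_bool) sample."""
    features = sorted(samples[0][0])
    candidates = [(f, lambda row, f=f: row[f]) for f in features]
    for a, b in combinations(features, 2):
        candidates.append((f"{a} AND {b}",
                           lambda row, a=a, b=b: row[a] and row[b]))
        candidates.append((f"{a} OR {b}",
                           lambda row, a=a, b=b: row[a] or row[b]))
    for name, fn in candidates:
        if all(fn(row) == goal for row, goal in samples):
            return name
    return None

# Invented observations: stealing succeeds only when the target is slow
# AND unaware of the AI.
samples = [
    ({"slow": True,  "unaware": True},  True),
    ({"slow": True,  "unaware": False}, False),
    ({"slow": False, "unaware": True},  False),
]
learned = learn_boolean_predicate(samples)
```

Note that, as the text says, none of the atoms inside the learned predicate need to be actions; the predicate just implies a goal condition.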

Note this is different than using MOSES to learn behavioral programs. In the case suggested here, it might be that none of the Atoms in the predicates learned involve actions.

One could also use MOSES to model the behavior of other characters in the world, or for clustering.

AtomSpace Population

Seems a priority (and simple)

We may want to put a random sample of the spatial and temporal relations between objects in the world into the Atomspace, to enable comparison of relations.

Spatial & Temporal Predicate Evaluation

Seems a priority

We will need automatic, recurrent evaluation of temporal/spatial links between IMPORTANT entities in the world (represented in the Atomspace)

We have fuzzy formulas for temporal predicates, in Dario's code for temporal inference. We may need something similar for spatial predicates, e.g. 3D-RCC predicates.
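As an illustration of what a fuzzy temporal formula looks like, here is a degree-valued "before" over time intervals. The ramp shape and scaling are made up for illustration; they are not Dario's actual formulas:

```python
def fuzzy_before(a, b):
    """Degree to which interval a=(start, end) precedes interval b.
    Crude linear ramp: 1.0 when a ends well before b starts, 0.0 when
    well after, intermediate degrees for overlapping intervals."""
    gap = b[0] - a[1]                              # positive if a ends first
    scale = max(a[1] - a[0], b[1] - b[0], 1e-9)    # longest interval length
    return min(1.0, max(0.0, 0.5 + gap / (2 * scale)))

# Clearly ordered intervals get degree 1.0 / 0.0; overlaps fall in between
d1 = fuzzy_before((0, 1), (5, 6))   # a well before b
d2 = fuzzy_before((5, 6), (0, 1))   # a well after b
d3 = fuzzy_before((0, 2), (1, 3))   # partial overlap
```

Analogous graded formulas for spatial relations (e.g. "near", "inside") would play the same role on the RCC side.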

Attention Allocation

Seems a priority (but the current code appears adequate; it just needs testing and maybe tweaking)

Hebbian link formation will be critical for binding together entities that are important at the same time as things that are important to the agent (batteries, its own body, etc.) …

STI will be critical for guiding PLN to avoid combinatorial explosion.

Things that change will need to get STI boosted. Things that are noted to be surprising in some way, e.g. newly formed clusters with high cluster quality, should also get STI boosted.
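The decay-plus-boost dynamic can be sketched as follows. This is a toy model, not the actual ECAN implementation, and the constants are invented:

```python
def update_sti(sti, boosted, boost=10.0, decay=0.9):
    """One attention cycle: every atom's STI decays multiplicatively;
    atoms that just changed or proved surprising get a boost
    (toy stand-in for the ECAN attention-allocation dynamics)."""
    new = {atom: value * decay for atom, value in sti.items()}
    for atom in boosted:
        new[atom] = new.get(atom, 0.0) + boost
    return new

# Invented STI table: battery_7 just moved, so it gets boosted this cycle
sti = {"battery_7": 20.0, "tree_3": 5.0}
sti = update_sti(sti, boosted=["battery_7"])
```

Under repeated cycles, unboosted atoms fade toward zero while recently changed or surprising ones stay in the attentional focus available to PLN.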

Modeling Pathfinding

Seems not a priority (though will be needed for more complex situations)

Eventually it will be useful for the agent to be able to model whether location X is reachable by agent A, given A's current location and A's capabilities. This can be done in some cases using PLN or MOSES based on prior observations of A; or in other cases by using pathfinding (or planning) within a headless Unity world, to emulate what the other agent A would do if using the pathfinder/planner.

This seems not to be strictly necessary for simple initial versions of the tasks above.
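A minimal version of "can agent A get over this wall?" (behavior #3) is BFS over a height grid, with each agent's climbing ability as a constraint. The grid, climb limits, and coordinates below are invented for illustration:

```python
from collections import deque

def reachable(heights, start, goal, max_climb):
    """BFS over a 2-D height grid: an agent may step to a 4-neighbour
    cell if the height increase is at most `max_climb` (stepping down
    is always allowed in this toy model)."""
    rows, cols = len(heights), len(heights[0])
    seen, frontier = {start}, deque([start])
    while frontier:
        r, c = frontier.popleft()
        if (r, c) == goal:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in seen
                    and heights[nr][nc] - heights[r][c] <= max_climb):
                seen.add((nr, nc))
                frontier.append((nr, nc))
    return False

# A 3-block wall across the middle row: X (climbs at most 2) is kept out,
# while the AI (climbs 3) can still get over it.
grid = [[0, 0, 0],
        [3, 3, 3],
        [0, 0, 0]]
```

Running this same search with another agent's observed capabilities is one way to emulate that agent's pathfinder, as suggested above.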

Specific Tasks (Must-Have)

This is intended as a list of "sizeable, critical tasks" that must be undertaken to realize the above behaviors in OpenCog. As of now (Sep 6, 2013) it may not be complete.

T1: Implement clustering (ideally using EM or some other strong clustering algorithm) on Atoms

T2: Create Clustering MindAgent that intermittently does clustering of various sorts (e.g. based on SimilarityLink, based on spatiotemporal proximity, etc.)

T20: Create primitive version of NL Generation (or connect/resurrect prior NLGen version) to enable articulation in English of individual links and small combinations of links. (If NL development doesn't support doing this right, hack some rules for specific link types and small combinations thereof.)

T21: Implement basic dialogue control based on OpenPsi (so that OpenPsi can choose speech acts along with other acts). So, create GroundedSchemaNodes for a few types of speech actions...

T22: Decide on, and tune, a set of basic high-level goals for OpenPsi to use for these test scenarios. Test what happens as the weighting of these is changed, and the scenario is changed.

T23: Complete the NL comprehension pipeline, to get basic English sentences into the Atomspace in PLN-friendly form. Automatically ground certain words to objects in the game world (perhaps), e.g. block, walk, move.

Some Other Tasks (Nice-To-Have)

Here are things that would be helpful for the game scenarios described above, but aren't highly critical.

U1: Implement RCC-3D in PLN after the pattern of IA, including uncertain truth value formulas

U2: Configure/connect MOSES to solve supervised learning problems triggered by ConceptNodes (here is a ConceptNode, now learn PredicateNodes distinguishing members of the Concept from other things)

U3: Configure/connect MOSES to learn dynamic action-plans in the game world, aimed at achieving goals. This probably requires creation of an "imaginary world" (based on the headless Unity world) for MOSES to use to test hypothetical dynamic action-plans, as previously discussed...