where the context is a property of the current inference, the action is an inference rule applied to a given premise (for backward chaining) or conclusion (for forward chaining) of the current inference tree, and the goal is the goodness of the produced inference.

expresses whether the inference tree being expanded, the node (premise or conclusion) from which it is expanded, and the rule used for expansion have some properties, or follow some patterns.

<good-produced-inference>

expresses whether the produced inference is good, according to some measure of goodness.
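To make the shape of such a control rule concrete, here is a hedged sketch in Python. All names (`ControlRule`, `applies`, the context keys) are illustrative assumptions, not OpenCog's actual API; a control rule is modeled as a conjunction of the three pattern predicates plus an estimate of how likely the produced inference is to be good when they hold.

```python
from dataclasses import dataclass
from typing import Callable

# Illustrative model of a control rule: three pattern predicates over the
# current expansion context, plus P(good inference | patterns hold).
@dataclass
class ControlRule:
    inference_pattern: Callable[[dict], bool]  # property of the inference tree
    node_pattern: Callable[[dict], bool]       # property of the expanded node
    rule_pattern: Callable[[dict], bool]       # property of the inference rule
    success_prob: float                        # goodness estimate (made-up number)

    def applies(self, ctx: dict) -> bool:
        # The rule fires only when all three patterns hold.
        return (self.inference_pattern(ctx)
                and self.node_pattern(ctx)
                and self.rule_pattern(ctx))

# Example: "deduction applied to an Implication node in a shallow tree
# tends to produce a good inference" (probability is hypothetical).
rule = ControlRule(
    inference_pattern=lambda c: c["depth"] < 5,
    node_pattern=lambda c: c["node_type"] == "Implication",
    rule_pattern=lambda c: c["rule"] == "deduction",
    success_prob=0.8,
)
```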

Learning Control Rules

Control rules can be handwritten but, more interestingly, they can be learned by OpenCog. Currently the main way they are learned is via pattern mining, by looking for conjunctive patterns such as

<inference-pattern> And <node-pattern> And <rule-pattern> And <good-produced-inference>

as well as

<inference-pattern> And <node-pattern> And <rule-pattern> And Not <good-produced-inference>

which are then transformed into control rules.
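The mining step can be sketched as follows. This is not OpenCog's pattern miner; it is a simplified illustration, under the assumption that past inferences are recorded as a corpus of traces, of how often a candidate conjunction of patterns co-occurs with a good versus a bad produced inference.

```python
# Count co-occurrence of a conjunctive pattern with good/bad outcomes in a
# corpus of past inference traces (hypothetical trace format).
def mine_control_rule(traces, inference_p, node_p, rule_p):
    good = bad = 0
    for t in traces:
        if inference_p(t) and node_p(t) and rule_p(t):
            if t["good"]:
                good += 1
            else:
                bad += 1
    support = good + bad
    # Laplace-smoothed estimate of P(good | pattern).
    return (good + 1) / (support + 2), support

traces = [
    {"rule": "deduction", "node": "Implication", "good": True},
    {"rule": "deduction", "node": "Implication", "good": True},
    {"rule": "deduction", "node": "Inheritance", "good": False},
]
p, support = mine_control_rule(
    traces,
    inference_p=lambda t: True,
    node_p=lambda t: t["node"] == "Implication",
    rule_p=lambda t: t["rule"] == "deduction",
)
# p == 0.75, support == 2 (Laplace smoothing: (2+1)/(2+2))
```

A pattern with high support and a smoothed probability far from 0.5 would then be turned into a control rule; patterns matching the negated form (`Not <good-produced-inference>`) yield rules that discourage the corresponding expansion.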

Aggregating Control Rules

When multiple control rules apply to the same inference rule, which often happens when they are learned, their predictions need to be aggregated in order to properly estimate the weight of that inference rule. For that, Bayesian model averaging, in particular a specialized form of Solomonoff Operator Induction [1], is currently used.
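As a rough illustration of the averaging step only (not of Solomonoff Operator Induction itself), each applicable rule's goodness estimate can be weighted by a posterior weight for that rule, with the weighted mean serving as the aggregated estimate; the weights here are invented for the example.

```python
# Hedged sketch of Bayesian model averaging over control rules that all
# fire on the same inference rule. Each entry pairs a rule's estimate of
# P(good inference) with a posterior weight for that rule.
def aggregate(rule_estimates):
    total_w = sum(w for _, w in rule_estimates)
    return sum(p * w for p, w in rule_estimates) / total_w

# Two control rules with hypothetical estimates and posterior weights:
estimate = aggregate([(0.8, 0.6), (0.5, 0.4)])
# estimate == 0.8*0.6 + 0.5*0.4 = 0.68
```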

Decision

Once all rule weights have been estimated for the next rule selection, the selection is performed with [Thompson Sampling], just as it would be with statically defined rule weights.
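The selection step can be sketched with a minimal Thompson Sampling routine. The Beta pseudo-counts below are illustrative stand-ins for the estimated rule weights (e.g. derived from success/failure statistics); one sample is drawn per rule and the rule with the highest sample is selected.

```python
import random

# Minimal Thompson Sampling: each inference rule carries Beta(alpha, beta)
# pseudo-counts over its probability of producing a good inference.
def select_rule(rules, rng=random):
    # rules: dict mapping rule name -> (alpha, beta) pseudo-counts
    samples = {name: rng.betavariate(a, b) for name, (a, b) in rules.items()}
    return max(samples, key=samples.get)

# Hypothetical weights: deduction has mostly succeeded, abduction mostly failed.
rng = random.Random(42)
choice = select_rule({"deduction": (8, 2), "abduction": (3, 7)}, rng=rng)
```

Because sampling is stochastic, weaker rules are still occasionally picked, which preserves exploration while favoring rules with better estimated weights.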