ask the user for an example
the default is to use a new positive example from the previous search
if the user responds with Ctrl-d (eof) then the search stops
if the user responds with "ok" then the default is used
otherwise the user has to provide an example

construct a bottom clause using that example
this expects appropriate mode declarations to be in place

search for the best clause C

ask the user about C, who can respond with:

ok: clause added to theory

prune: statement added to prevent future
clauses that are subsumed by C
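
As a hedged illustration of how such a session might be started, assuming this describes
Aleph's induce_incremental/0 command; the example active(d42) is purely illustrative:

    % illustrative interactive session at the Prolog top level
    % (background knowledge, mode declarations and examples are assumed to be loaded already)
    ?- induce_incremental.
    % at each cycle the system asks for an example:
    %   - reply  ok.      to use the default (a new positive example from the previous search)
    %   - reply  Ctrl-d   to stop the search
    %   - or type an example of your own, e.g.  active(d42).   (illustrative predicate)
    % after the clause search, reply  ok.  to add the best clause C to the theory,
    % or  prune.  to block future clauses subsumed by C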

performs a theory-level search
currently only with search = rls and evalfn = accuracy
induce entire theories from batch data
using a randomised local search
currently, this can use simulated annealing with a fixed temperature,
GSAT, or a WSAT-like algorithm
the choice of these is specified by the parameter: rls_type
all methods employ random multiple restarts
and a limit on the number of moves
the number of restarts is specified by aleph_set(tries,...)
the number of moves is specified by aleph_set(moves,...)
annealing currently restricted to using a fixed temperature
the temperature is specified by aleph_set(temperature,...)
the fixed temperature makes it equivalent to the Metropolis algorithm
WSAT requires a random-walk probability
the walk probability is specified by aleph_set(walk,...)
a walk probability of 0 is equivalent to doing standard GSAT
theory accuracy is the evaluation function
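
A hedged sketch of the settings described above, assuming this describes Aleph's
induce_theory/0 command; the rls_type values and all numeric values are illustrative
assumptions, not defaults:

    % illustrative settings for a theory-level randomised local search
    :- aleph_set(search, rls).          % theory-level search currently requires rls
    :- aleph_set(evalfn, accuracy).     % theory accuracy as the evaluation function
    :- aleph_set(rls_type, anneal).     % assumed values: anneal, gsat or wsat
    :- aleph_set(tries, 10).            % number of random restarts (illustrative)
    :- aleph_set(moves, 1000).          % limit on moves per restart (illustrative)
    :- aleph_set(temperature, 5).       % fixed temperature for annealing (illustrative)
    % :- aleph_set(walk, 0.1).          % only relevant for the WSAT-like search; 0 gives GSAT
    ?- induce_theory.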

search for logical constraints that
hold in the background knowledge
A constraint is a clause of the form aleph_false:-...
This is modelled on the Claudien program developed by
L. De Raedt and his colleagues in Leuven
Constraints that are `nearly true' can be obtained
by altering the noise setting
All constraints found are stored as `good clauses'.
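
A hedged sketch, assuming this describes Aleph's induce_constraints/0 command; the noise
value is illustrative and show(constraints) is assumed to be the corresponding display
command:

    % illustrative settings for finding integrity constraints in the background knowledge
    :- aleph_set(noise, 0).        % exact constraints; a higher value allows `nearly true' ones
    ?- induce_constraints.
    ?- show(constraints).          % assumed display command; constraints are also kept as good clauses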

search for interesting boolean features
each good clause found in a search constitutes a new boolean feature
the maximum number of features is controlled by aleph_set(max_features,F)
the features are constructed by repeatedly searching for good clauses
while the number of features is =< F
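
A hedged sketch, assuming this describes Aleph's induce_features/0 command; the value of
max_features and the show(features) call are illustrative assumptions:

    % illustrative settings for boolean feature construction
    :- aleph_set(max_features, 50).    % construct at most 50 features (illustrative)
    ?- induce_features.
    ?- show(features).                 % assumed display command for the features found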

construct a theory using recursive partitioning.
rules are obtained by building a tree
the tree constructed can be one of 4 types
classification, regression, class_probability or model
the type is set by aleph_set(tree_type,...)
In addition, the following parameters are relevant

aleph_set(classes,ListofClasses): when tree_type is classification
or class_probability

aleph_set(prune_tree,Flag): for pruning rules from a tree

aleph_set(confidence,C): for pruning of rules as described by
J. R. Quinlan in the C4.5 book

aleph_set(lookahead,L): lookahead for the refinement operator to avoid
local zero-gain literals

aleph_set(dependent,A): argument of the dependent variable in the examples
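
The sketch below collects these settings for a classification tree, assuming this describes
Aleph's induce_tree/0 command; the class names, argument position and all other concrete
values are illustrative:

    % illustrative settings for building a classification tree
    :- aleph_set(tree_type, classification).
    :- aleph_set(classes, [pos, neg]).   % illustrative class labels
    :- aleph_set(dependent, 2).          % the class is argument 2 of the examples (illustrative)
    :- aleph_set(lookahead, 2).          % refine by up to 2 literals at a time (illustrative)
    :- aleph_set(prune_tree, true).
    :- aleph_set(confidence, 0.25).      % C4.5-style confidence level (illustrative)
    ?- induce_tree.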

The basic procedure attempts to construct a tree to predict the dependent
variable in the examples. Note that the mode declarations must specify the
variable as an output argument. Paths from root to leaf constitute clauses.
Tree-construction is viewed as a refinement operation: any leaf can currently
be refined by extending the corresponding clause. The extension is done using
Aleph's automatic refinement operator that extends clauses within the mode
language. A lookahead option allows additions to include several literals.
Classification problems currently use entropy gain to measure worth of additions.
Regression and model trees use reduction in standard deviation to measure
worth of additions. This is not quite correct for the latter.
Pruning for classification is done on the final set of clauses from the tree.
The technique used here is the reduced-error pruning method.
For classification trees, this is identical to the one proposed by
Quinlan in C4.5: Programs for Machine Learning, Morgan Kaufmann.
For regression and model trees, this is done by using a pessimistic estimate
of the sample standard deviation. This assumes normality of observed values
in a leaf. This method and others have been studied by L. Torgo in
"A Comparative Study of Reliable Error Estimators for Pruning Regression
Trees"
Following work by F Provost and P Domingos, pruning is not employed
for class probability prediction.
Currently no pruning is performed for model trees.
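
Since the mode declarations must present the dependent variable as an output argument, a
hypothetical mode set-up for a two-argument example predicate class/2 might look like this,
assuming Aleph's usual modeh/2 and modeb/2 declarations; all predicate and type names are
illustrative:

    % hypothetical examples of the form class(Object, Class)
    :- modeh(1, class(+obj, -class)).        % the dependent variable (argument 2) is an output
    :- modeb(*, has_ring(+obj, -ring)).      % illustrative background predicates
    :- modeb(*, ring_size(+ring, #int)).
    :- aleph_set(dependent, 2).              % matches the argument position of the class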

Pred is of the form N/A, where the atom N is the name of the predicate, and A its arity.
Specifies that outputs and constants for literals with symbol N/A are to be evaluated
lazily during the search. This is particularly useful if the constants required
cannot be obtained from the bottom clause constructed by using a single example.
During the search, the literal is called with a list containing a pair of lists for each
input argument representing `positive' and `negative' substitutions obtained
for the input arguments of the literal. These substitutions are obtained by executing
the partial clause without this literal on the positive and negative examples.
The user needs to provide a definition capable of processing a call with such a list of
list-pairs in each input argument, and of computing the outputs from this information.
For further details see A. Srinivasan and R. Camacho, Experiments in numerical reasoning with
ILP, Journal of Logic Programming.
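
As a hedged sketch of such a definition: a hypothetical threshold predicate leq/2, declared
lazily evaluated, where the ordinary clause does a numeric comparison and the lazy clause
receives the [Positive,Negative] substitution lists for its input argument and computes a
constant for its output argument. Representing the substitutions as flat lists of numbers is
an assumption made here purely for illustration:

    :- lazy_evaluate(leq/2).

    % ordinary evaluation: a plain numeric comparison
    leq(X, Y) :-
        number(X), number(Y), !,
        X =< Y.

    % lazy evaluation: the input argument arrives as [PosSubs, NegSubs],
    % the substitutions collected from the positive and negative examples;
    % here (illustratively) the largest positive value becomes the constant
    % returned in the output argument
    leq([PosSubs, _NegSubs], Threshold) :-
        is_list(PosSubs),
        PosSubs \== [],
        max_list(PosSubs, Threshold).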

Pred is of the form N/A, where the atom N is the name of the predicate, and A its arity.
Specifies that predicate N/A will be used to construct and execute models
in the leaves of model trees. This automatically results in predicate N/A being
lazily evaluated (see lazy_evaluate/1).
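
A minimal hedged illustration, assuming this entry describes Aleph's model/1 declaration and
using a hypothetical predicate predict_mpg/3 as the model fitted and applied in the leaves of
a model tree:

    % hypothetical model predicate for the leaves of a model tree
    :- model(predict_mpg/3).           % predict_mpg/3 is then also lazily evaluated, as noted above
    :- aleph_set(tree_type, model).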

Pred is of the form N/A, where the atom N is the name of the predicate,
and A its arity. States that only positive substitutions are required
during lazy evaluation of literals with symbol N/A.
This saves some theorem-proving effort.
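
A minimal hedged illustration, assuming this entry describes Aleph's positive_only/1
declaration and reusing the hypothetical leq/2 predicate from the lazy evaluation sketch
above:

    :- lazy_evaluate(leq/2).
    :- positive_only(leq/2).     % only `positive' substitutions are gathered for lazy calls to leq/2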

Head is the head of the current hypothesised clause.
Body is the body of the current hypothesised clause.
Label is the list [P,N,L] where P is the number of positive examples covered by the
hypothesised clause, N is the number of negative examples covered by the
hypothesised clause, and L is the number of literals in the
hypothesised clause.
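
As a hedged illustration, assuming this entry describes Aleph's hypothesis/3 predicate, the
sketch below writes a user-defined constraint in the aleph_false form described earlier to
reject any hypothesised clause covering more than an illustrative number of negative
examples:

    % illustrative user-defined constraint on the current hypothesis
    aleph_false :-
        hypothesis(_Head, _Body, [_P, N, _L]),
        N > 10.       % illustrative limit on negative cover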

Head is the head of the current hypothesised clause.
Body is the body of the current hypothesised clause.
Label is the list [P,N,L] where P is the number of positive examples covered by the
hypothesised clause, N is the number of negative examples covered by the
hypothesised clause, and L is the number of literals in the
hypothesised clause. Module is the module of the input file.
Internal predicates.