Glossary

Aggregation

Integration of predictions, examples or models from a finer granularity to a coarser one. Usually applied through a hierarchy. See also "disaggregation".
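As an illustrative sketch (the hierarchy and prediction values are invented for the example), aggregating store-level predictions one level up a hierarchy to region level might look like:

```python
# Hypothetical example: sum fine-grained (store-level) predictions up one
# level of a hierarchy to obtain coarser (region-level) predictions.
hierarchy = {"store_A": "north", "store_B": "north", "store_C": "south"}
predictions = {"store_A": 120.0, "store_B": 80.0, "store_C": 50.0}

def aggregate(predictions, hierarchy):
    """Sum each leaf prediction into its parent granule."""
    totals = {}
    for leaf, parent in hierarchy.items():
        totals[parent] = totals.get(parent, 0.0) + predictions[leaf]
    return totals
```

Other aggregation operators (averages, maxima) would fit the same pattern, depending on what the predictions represent.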

Background knowledge

Information, in terms of knowledge, bias or parameters, which is known before training begins. It can also change or increase after the model has been trained. See: “theory revision” and “operating context”.

Calibration

Process in which the expected value or probability of a prediction matches the actual value or the average of a broader (more global) population.
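As a rough sketch (the predictions, labels and two-bin scheme are invented for the example), calibration can be checked by comparing the mean predicted probability within each probability bin against the observed positive rate in that bin:

```python
# Hypothetical example: a classifier is well calibrated when, among cases
# given probability around p, roughly a fraction p are actually positive.
preds = [0.1, 0.1, 0.1, 0.1, 0.1, 0.9, 0.9, 0.9, 0.9, 0.9]
labels = [0, 0, 0, 0, 1, 1, 1, 1, 1, 0]

def calibration_gap(preds, labels, threshold=0.5):
    """Average |mean predicted probability - observed positive rate| over two bins."""
    gaps = []
    for lo, hi in [(0.0, threshold), (threshold, 1.0001)]:
        bucket = [(p, y) for p, y in zip(preds, labels) if lo <= p < hi]
        if bucket:
            mean_p = sum(p for p, _ in bucket) / len(bucket)
            pos_rate = sum(y for _, y in bucket) / len(bucket)
            gaps.append(abs(mean_p - pos_rate))
    return sum(gaps) / len(gaps)
```

Here both bins show a gap of 0.1: cases predicted at 0.1 are positive 20% of the time, and cases predicted at 0.9 are positive 80% of the time, so the model is slightly miscalibrated.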

Concept shift (or drift)

A kind of dataset shift that appears in X --> Y or Y --> X problems, where the conditional probability p(y|x) or p(x|y) (respectively) changes while the marginal probability p(x) or p(y) (respectively) is maintained.

Context

See “operating context”.

Covariate shift

A kind of dataset shift that appears in X --> Y problems, where the conditional probabilities p(y|x) are preserved but the feature space probability p(x) changes.
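A small simulated sketch (the labelling rule and the two Gaussian feature distributions are assumptions for illustration) shows p(x) changing between training and deployment while p(y|x) stays fixed:

```python
# Hypothetical example of covariate shift: the labelling rule p(y|x) is
# identical in both contexts (y = 1 iff x > 0), but the feature
# distribution p(x) differs between training and deployment.
import random

def label(x):
    """p(y|x): deterministic rule, unchanged across contexts."""
    return 1 if x > 0 else 0

rng = random.Random(0)
train_x = [rng.gauss(-1.0, 1.0) for _ in range(1000)]   # p_train(x)
deploy_x = [rng.gauss(+1.0, 1.0) for _ in range(1000)]  # p_deploy(x)

train_pos = sum(label(x) for x in train_x) / len(train_x)
deploy_pos = sum(label(x) for x in deploy_x) / len(deploy_x)
# The same rule yields very different positive rates under the two p(x),
# so a model fitted to the training region may be poorly tuned on deployment.
```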

D2K (Data to knowledge) process

The knowledge discovery process that turns data into actual knowledge, which can then be applied to new situations.

Datamart

Part of a data warehouse which covers one area of an organisation. It is usually represented by a multi-dimensional model.

Dataset shift

Appears when training and test (or deployment) joint distributions are different. According to Moreno-Torres et al. 2012, there are several kinds of dataset shift, including covariate shift, prior probability shift, concept shift or a combination of the former.

Data warehouse

A special kind of database for analytical purposes. Usually composed of several datamarts.

Deployment

Stage in the D2K process where models are applied to new data.

Dimensional

Referring to one aspect of a problem or the data, such as time, location, etc. See “multi-dimensional”.

Disaggregation

Opposite process to "aggregation", where predictions, examples or models are broken down into finer granules.

Discretisation

A way in which a continuous value (usually a feature) is transformed into a discrete one. See “quantisation”.
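A minimal sketch (the function and values are hypothetical) of one common scheme, equal-width discretisation, which splits the range of a continuous feature into k intervals and maps each value to its interval index:

```python
# Hypothetical example: equal-width discretisation of a continuous
# feature into k bins, returning a bin index (0..k-1) per value.
def discretise(values, k=3):
    lo, hi = min(values), max(values)
    width = (hi - lo) / k or 1.0  # avoid zero width when all values are equal
    # Clamp the maximum value into the last bin.
    return [min(int((v - lo) / width), k - 1) for v in values]

ages = [18, 22, 35, 41, 64]
bins = discretise(ages, k=3)
```

Other schemes (equal-frequency, entropy-based) differ only in how the cut points are chosen.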

Enrichment

The transformation of a non-versatile specialised model into a more general versatile model.

Feature

An attribute or variable which describes a problem instance.

Granules

Term used to refer to the units at a given degree of granularity, from coarse to fine.

Hierarchical data

Data which is organised at different levels or granules. A hierarchy is usually a directed graph. Dimensions are usually hierarchical.

Model

Descriptive or predictive knowledge, generally learned from data, which can be applied or deployed to new data.

Multi-dimensional

Referring to problems or data with many facets or dimensions. See “dimensional”.

Multi-dimensional model

Database model (common in data warehousing) which represents information in terms of a central entity with indicators and a set of dimensions following hierarchies.

Multi-relational

Referring to problems where data is expressed in terms of multiple relations or predicates, as in first-order logic or in relational databases.

Operating condition

A slightly more restrictive term for operating context. It is the original term used in ROC analysis, which we generalise in this project under the term operating context. Nonetheless, the term operating condition is fairly general in science and engineering, referring to the set of variables or factors that may affect a given measurement or the operation of a system or device (see, e.g., OC1, OC2, OC3). Also, its plural, as in the (standard) "operating conditions" of a system, refers to the set of operating conditions under which a device works properly or within a given margin of error or cost. On occasion, if the condition is a well-known magnitude, we can use the name of the magnitude instead, as in "operating temperature".

Operating context

The set of characteristics of, and knowledge about, the situation where a model is learnt or applied.

"Parametrised" operating context

A parametrised view of an operating context as a set of parameters that determine the optimality of the application of a model, such as a cost function, class imbalance, etc.

Prior probability shift

A kind of dataset shift that appears in Y --> X problems, where the conditional probabilities p(x|y) are preserved but the output space probability p(y) changes.

Quantification

A task in machine learning which aggregates several individual predictions in order to get an overall indicator, such as the class prevalence (or estimated class probability), an average, etc. A special kind of “aggregation”, applied to model predictions.
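One simple quantification method, often known as probabilistic classify and count, estimates class prevalence by averaging the predicted positive-class probabilities over a batch; a minimal sketch (the scores are invented for illustration):

```python
# Hypothetical example of quantification: rather than reporting each
# individual prediction, aggregate the predicted probabilities of a batch
# into a single prevalence estimate for the positive class.
scores = [0.9, 0.8, 0.7, 0.2, 0.1, 0.3, 0.95, 0.6]

def estimated_prevalence(scores):
    """Probabilistic classify-and-count: mean predicted probability."""
    return sum(scores) / len(scores)
```

Note that simply thresholding each score and counting positives can give a biased prevalence estimate when the classifier is imperfect, which is why quantification is studied as a task in its own right.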

Quantisation

An alternative to discretisation in which a continuous value (usually a feature) is transformed into a discrete one. See “discretisation”.

Reframing

The process of adapting a model to perform well over a range of operating contexts. This is usually achieved by using a more versatile model, which is not discarded after each application.
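As a hedged sketch of one common form of reframing (threshold choice; the cost-based threshold formula assumes calibrated probabilities, and all names are hypothetical), the scoring model is kept intact and only the decision threshold is adapted to each operating context:

```python
# Hypothetical example: a versatile scoring model is retained across
# contexts, and only the decision threshold is reframed according to the
# context's false-positive / false-negative costs.
def reframe_threshold(cost_fp, cost_fn):
    """Cost-optimal threshold on calibrated probabilities for 0/1 decisions."""
    return cost_fp / (cost_fp + cost_fn)

def decide(score, cost_fp, cost_fn):
    """Apply the context-specific threshold to a model score."""
    return 1 if score >= reframe_threshold(cost_fp, cost_fn) else 0
```

For instance, a score of 0.4 is rejected under symmetric costs but accepted when false negatives are four times as costly, without retraining the underlying model.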

ROC analysis

A set of tools to plot classifiers graphically and to evaluate their performance.
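A minimal sketch (function and data are hypothetical) of computing the points of a ROC curve, i.e., (false positive rate, true positive rate) pairs, by sweeping a threshold over classifier scores:

```python
# Hypothetical example: each distinct score, used as a threshold, yields
# one (FPR, TPR) point of the ROC curve.
def roc_points(scores, labels):
    """Return (FPR, TPR) pairs from highest to lowest threshold."""
    pos = sum(labels)
    neg = len(labels) - pos
    points = [(0.0, 0.0)]
    for t in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        points.append((fp / neg, tp / pos))
    return points

scores = [0.9, 0.7, 0.4, 0.2]
labels = [1, 1, 0, 1]
```

Each choice of threshold corresponds to an operating condition, which is why ROC analysis is the original setting for that term.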

Theory revision

A set of tools to modify a model or theory when it has become inconsistent with, or insufficient for, new evidence.

Training

Refers to the stage where models are learnt from data.

Transfer learning (inductive transfer)

A general area in machine learning where knowledge from one problem is used to solve a different problem.

Versatile model

A model that is more general than needed for a single context, in such a way that it can be more easily adapted (or parameterised) to other contexts.