Aggregation functions are most often studied from the perspective
of algebra, functional equations, and calculus. Yet, as the theory
of data fusion stems from practical applications and practitioners'
demands, there is also a need to approach them from an algorithmic
perspective.
The first part of my talk concerns the problem of fitting
aggregation functions (from predefined classes, such as weighted
arithmetic means or weighted quasi-arithmetic means) that are
"best" -- with respect to some loss function -- to empirical data.
We will see that this problem is similar to, but more complex
than, regression analysis.
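As a minimal sketch of what such a fitting task can look like,
consider learning the weights of a weighted arithmetic mean
A(x) = sum_j w_j x_j, with w_j >= 0 and sum_j w_j = 1, from
input-output pairs by least squares. The projected gradient
descent below (with a simple clip-and-renormalise projection onto
the simplex) is my own illustrative choice, not a method from the
talk; the constraints on the weights are what make this harder
than ordinary linear regression.

```python
# Sketch: fit the weights of a weighted arithmetic mean
# A(x) = sum_j w_j x_j (w_j >= 0, sum_j w_j = 1) to data pairs
# (x_i, y_i) by least squares, via projected gradient descent.
# The clip-and-renormalise projection is a simple heuristic.

def fit_weighted_mean(X, y, steps=5000, lr=0.01):
    n = len(X[0])
    w = [1.0 / n] * n  # start from the plain arithmetic mean
    for _ in range(steps):
        # gradient of sum_i (w . x_i - y_i)^2 with respect to w
        grad = [0.0] * n
        for xi, yi in zip(X, y):
            err = sum(wj * xj for wj, xj in zip(w, xi)) - yi
            for j in range(n):
                grad[j] += 2.0 * err * xi[j]
        w = [wj - lr * g for wj, g in zip(w, grad)]
        # push back toward the probability simplex (heuristic)
        w = [max(wj, 0.0) for wj in w]
        s = sum(w)
        w = [wj / s for wj in w]
    return w

# toy data generated with known weights (0.7, 0.2, 0.1)
X = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0),
     (0.5, 0.5, 0.0), (0.2, 0.3, 0.5)]
true_w = (0.7, 0.2, 0.1)
y = [sum(wj * xj for wj, xj in zip(true_w, xi)) for xi in X]
w = fit_weighted_mean(X, y)
```

On this noiseless toy data the recovered weights come out close
to the generating ones; with noisy observations one would obtain
the constrained least-squares fit instead.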
It is worth noting that, classically, the items we aggregate are
assumed to be just sequences of real numbers. Thus, in the second
part of the talk I will focus on the aggregation of non-standard
data types, such as character strings (e.g., DNA sequences),
points in R^d for d>1 (often studied in the field of computational
geometry), or numeric vectors of varying lengths (which may be
encountered when one tries to evaluate the performance of
scientists). Such data fusion methods are useful, e.g., in data
clustering or exploratory data analysis. It turns out that
aggregates of more complex data types can rarely be expressed by
closed-form mathematical formulas, and thus computing them
requires elaborate algorithms.
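To illustrate the last point with one concrete (and deliberately
simple) example of my own choosing: a natural "average" of a set
of character strings is the medoid, i.e., the input string
minimising the total Levenshtein distance to all the others -- a
discrete analogue of the 1-median. There is no closed-form
expression for it; it is defined only through an algorithm.

```python
# Sketch: aggregating character strings by taking the medoid
# under Levenshtein (edit) distance -- an aggregate with no
# closed-form expression, computed purely algorithmically.

def levenshtein(a, b):
    # classic dynamic-programming edit distance, O(|a| * |b|)
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def string_medoid(strings):
    # the input string with the smallest total distance to the rest
    return min(strings,
               key=lambda s: sum(levenshtein(s, t) for t in strings))

seqs = ["ACGT", "ACGA", "AGGT", "TCGT", "ACTT"]
rep = string_medoid(seqs)  # a representative "consensus" sequence
```

Restricting the search to the input strings keeps the problem
tractable; finding an unrestricted minimiser over all strings
(the median string problem) is NP-hard in general.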