UAI 2005 Conference Review Session

Voting is a very general method of preference aggregation. A voting rule
takes as input every voter's vote (typically, a ranking of the
alternatives), and produces as output either just the winning
alternative or an aggregate ranking of the alternatives. One potential
view of voting is the following. There exists a "correct" outcome
(winner/ranking), and each voter's vote corresponds to a noisy
perception of this correct outcome. If we are given the noise model,
then for any vector of votes, we can compute the maximum likelihood
estimate of the correct outcome. This maximum likelihood estimate
constitutes a voting rule. In this paper, we ask the following
question: For which common voting rules does there exist a noise model
such that the rule is the maximum likelihood estimate for that noise
model? We require that the votes are drawn independently given the
correct outcome (we show that without this restriction, all voting
rules have the property). We study the question both for the case where
outcomes are winners and for the case where outcomes are rankings. In
either case, only some of the common voting rules have the property.
Moreover, the sets of rules that satisfy the property are incomparable
between the two cases: satisfying the property in one case does not
imply satisfying it in the other.
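To make the maximum likelihood view concrete, here is a minimal sketch under an assumed noise model (the model, the decay parameter phi, and all function names are illustrative, not from the paper): each voter submits a ranking, and the probability of a ranking given the true winner decays geometrically with the winner's position in it. Because votes are independent given the correct outcome, the joint likelihood is a product, and the MLE winner is the candidate maximizing it.

```python
from math import prod

# Hypothetical noise model (an assumption for illustration): the
# probability of observing a ranking, given that w is the correct
# winner, is proportional to phi ** (position of w in the ranking),
# for some 0 < phi < 1. Lower positions of w are more likely.
def likelihood(ranking, winner, phi=0.5):
    return phi ** ranking.index(winner)

def mle_winner(votes, candidates, phi=0.5):
    """Votes are drawn independently given the correct outcome, so
    the joint likelihood factors into a product over voters."""
    def joint(w):
        return prod(likelihood(v, w, phi) for v in votes)
    return max(candidates, key=joint)

votes = [("a", "b", "c"), ("a", "c", "b"), ("b", "a", "c")]
print(mle_winner(votes, ["a", "b", "c"]))  # prints "a"
```

Under this particular model, maximizing the product amounts to minimizing the winner's total position across votes, so the resulting voting rule is a positional scoring rule; other noise models yield other rules, which is exactly the correspondence the paper investigates.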

This paper presents an algorithm, based on the concave-convex
procedure, for minimizing the difference between two submodular
functions. Because several commonly used metrics in machine learning
(e.g., mutual information) are submodular, the problem of minimizing
the difference of two submodular functions arises naturally in many machine
learning applications. Two such applications are learning
discriminatively structured graphical models and feature selection
under computational complexity constraints. The authors also present
results on synthetic data indicating that classifiers based on
discriminative graphical models learned using this algorithm can
significantly outperform classifiers based on generative graphical
models.
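A minimal sketch of the iterative scheme may help, with the caveat that this is a toy reconstruction, not the paper's implementation: to minimize f(X) - g(X) with f and g submodular (and g normalized so g of the empty set is 0), each iteration replaces g with a modular lower bound that is tight at the current set, then minimizes the resulting upper bound f - h, which is submodular. Here the inner minimization is brute force over a tiny ground set; a real implementation would call a polynomial-time submodular minimization routine.

```python
from itertools import combinations

def modular_lower_bound(g, ground, X):
    # List the elements of X first in a permutation; the telescoping
    # marginal gains of g along this chain define a modular function h
    # with h(S) <= g(S) for all S (Edmonds' greedy point lies in the
    # base polytope of g) and h(X) == g(X), assuming g(empty) == 0.
    order = sorted(X) + sorted(set(ground) - set(X))
    h, prev = {}, frozenset()
    for e in order:
        cur = prev | {e}
        h[e] = g(cur) - g(prev)
        prev = cur
    return h

def submodular_supermodular(f, g, ground, X0=frozenset(), max_iter=20):
    """Iteratively minimize f(X) - g(X): replace g by a tight modular
    lower bound h, minimize f - h, and repeat until no improvement."""
    X = frozenset(X0)
    for _ in range(max_iter):
        h = modular_lower_bound(g, ground, X)
        # Brute-force inner minimization of f(S) - h(S) (a stand-in
        # for a proper submodular minimizer on this tiny ground set).
        best = min((frozenset(c) for r in range(len(ground) + 1)
                    for c in combinations(ground, r)),
                   key=lambda S: f(S) - sum(h[e] for e in S))
        if f(best) - g(best) >= f(X) - g(X):
            break  # no strict decrease of the true objective
        X = best
    return X

ground = [1, 2, 3]
f = lambda S: min(len(S), 2)   # concave of |S|, hence submodular
g = len                        # modular, hence submodular
print(submodular_supermodular(f, g, ground))  # frozenset({1, 2, 3})
```

Since the modular bound lies below g and touches it at the current set, each accepted step decreases the true objective, mirroring how the concave-convex procedure monotonically decreases a difference of convex functions.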