Abstract

Attackers may seek to manipulate recommender systems in
order to promote or suppress certain items. Existing defenses that analyze
ratings, however, discard useful information from honest raters. In this paper,
we show that this is unavoidable and provide a lower bound on how much
information must be discarded. We use an information-theoretic framework to
exhibit a fundamental tradeoff between manipulation-resistance and optimal use
of genuine ratings in recommender systems. We define a recommender system to be
(n, c)-robust if an attacker with n sybil
identities cannot cause more than c units of damage to
predictions. We prove that any (n, c)-robust recommender system must discard
Ω(log(n/c)) units of useful information from each genuine rater.
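
To make the stated bound concrete, here is a hypothetical numerical instance (the values n = 1024 and c = 1 are illustrative choices, not taken from the paper), using a base-2 logarithm:

```latex
% Hypothetical instance of the lower bound (illustrative numbers only).
% Suppose an attacker controls n = 1024 sybil identities and the system
% tolerates at most c = 1 unit of damage to predictions. Then:
\[
  \Omega\!\left(\log \frac{n}{c}\right)
  = \Omega\!\left(\log_2 \frac{1024}{1}\right)
  = \Omega(10) \text{ bits per genuine rater.}
\]
% The discarded information grows logarithmically in the sybil budget n
% and shrinks as the tolerated damage c increases.
```

Under these assumed parameters, roughly ten bits of each honest rater's information must be discarded, which illustrates the tradeoff: tighter damage limits (smaller c) or stronger attackers (larger n) force the system to throw away more genuine-rating information.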