SUMMARY:
Most social choice algorithms rely on private data from individuals to
maximize a social utility. However, many algorithms are manipulable – an
individual can misreport her data to obtain an outcome she prefers,
often at the cost of the social good. The literature typically addresses
this by declaring an algorithm either “manipulable” or “strategy-proof”.
However, for many decisions we are forced to choose between a manipulable
algorithm and an algorithm with other undesirable properties; for
instance, a dictatorship is essentially the only strategy-proof way to
decide an election. Thus, we view it as unwise to take an all-or-nothing
approach to manipulation, since we often have to settle for nothing.
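As a minimal sketch of this kind of manipulation, consider the Borda count (used here purely for illustration; the four-voter profile and candidate names are invented, not drawn from the dissertation). The last voter buries her second choice to elect her favorite:

```python
# Borda count: with m candidates, a ballot awards m-1 points to its top
# choice, m-2 to the next, and so on down to 0 for its last choice.
def borda_winner(profile, candidates="ABCD"):
    scores = {c: 0 for c in candidates}
    m = len(candidates)
    for ballot in profile:
        for pos, c in enumerate(ballot):
            scores[c] += m - 1 - pos
    return max(scores, key=scores.get)

# Hypothetical profile; the fourth voter's true preference is A > B > C > D.
truthful = ["BACD", "ABDC", "DBCA", "ABCD"]
print(borda_winner(truthful))     # prints "B": B wins when all report truthfully

# The fourth voter misreports A > C > D > B, burying B (her second choice);
# her favorite A now wins, even though her true ranking is unchanged.
manipulated = ["BACD", "ABDC", "DBCA", "ACDB"]
print(borda_winner(manipulated))  # prints "A"
```

Here a single voter changes the outcome from B to A without any other ballot changing, which is exactly the gap between a reported profile and a truthful one that the Price of Deception quantifies.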
In this dissertation, we focus on algorithmic design with strategic
behavior in mind. Specifically, we develop a framework to examine the
effect of manipulation on social choice using a game-theoretic model. Our
model of human behavior is supported by extensive experimental evidence
from psychology and economics, and allows us to better understand the
likely outcomes of strategic behavior. We introduce a measure of
manipulation – the Price of Deception – that quantifies the impact of
manipulation. With the Price of Deception we are able to identify
algorithms that are negligibly impacted by manipulation, algorithms where
strategic behavior leads to arbitrarily poor outcomes, and everything in
between. We demonstrate the power of our model and the Price of Deception
by answering open problems in assignment, facility location, elections,
and stable marriage, including a 28-year-old open problem posed by
Gusfield and Irving.
Our results demonstrate that the Price of Deception, like computational
complexity, provides finer distinctions of manipulability than a binary
“yes” or “no”.