Computational Social Choice: Autumn 2011

Social choice theory is the study of mechanisms for collective decision making, such as election systems or protocols for fair division, and computational social choice addresses problems at the interface of social choice theory with computer science.
This course will introduce some of the fundamental concepts in social choice theory and
related disciplines, and expose students to current research at the interface of social
choice theory with logic and computation. This is an advanced course
in the Master of Logic programme.
Students from AI, Mathematics, Economics, Philosophy, etc. are also very welcome to attend
(contact me if you are unsure about your qualifications).

This page will be updated throughout the semester, with links to the slides used in class,
homework assignments, and information on possible changes in the schedule.
Please check it regularly.

Focus 2011:
For this edition of the course, I plan to specifically focus on topics at the interface of logic and social choice, as outlined in this paper, including in particular the axiomatic method in social choice theory, preference modelling and social choice in combinatorial domains, and judgment aggregation. Other topics in computational social choice will be addressed as time permits; interested students will be able to further explore those later on during (individual) projects with members of the COMSOC Group.

Prerequisites:
The only prerequisite for this course is "general mathematical maturity": you will be asked to prove things. For a couple of the lectures I will assume some basic background in complexity theory (up to the notion of NP-completeness); if you do not yet have this background, then it will be possible to learn the necessary things on the side, with a bit of extra work. Finally, it makes sense to take this course on top of courses on game theory and other courses touching upon concepts from economic theory, but this is certainly not a requirement.

Week / Date / Content / Homework

Week 1: 7 September 2011

Introduction (4up).
The first part of this introductory lecture has illustrated some of the problems we are going to address in the course, mostly by way of example. To find out more about the field, without worrying about the details for now, you could have a look through the Short Introduction:

You could also have a look at the COMSOC Website, for information on the COMSOC workshop series, for a list of PhD theses in the field, and for a mailing list carrying event and job announcements.

In the second part of the lecture we have seen a first proof of Arrow's famous impossibility theorem. You can look up the details in this paper, which will also serve as the main reference for large parts of the remainder of the course:
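A good way to get a feel for the kind of problem behind Arrow's Theorem is to check the classical Condorcet paradox mechanically: with three voters and three alternatives, pairwise majority voting can produce a cycle. A minimal sketch (the profile is the standard textbook example, not one from the slides):

```python
# Three voters, each with a strict ranking of alternatives a, b, c
# (best first). This is the classic Condorcet profile.
profile = [
    ["a", "b", "c"],
    ["b", "c", "a"],
    ["c", "a", "b"],
]

def majority_prefers(x, y, profile):
    """True if a strict majority of voters rank x above y."""
    wins = sum(1 for ballot in profile if ballot.index(x) < ballot.index(y))
    return wins > len(profile) / 2

# The pairwise majority comparisons form a cycle: a > b > c > a.
print(majority_prefers("a", "b", profile))  # True
print(majority_prefers("b", "c", profile))  # True
print(majority_prefers("c", "a", profile))  # True
```

So the majority relation is not transitive, and no alternative beats all others: exactly the kind of collective irrationality that Arrow's Theorem generalises.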

Impossibility Theorems (4up).
In this lecture on impossibility theorems and the axiomatic method we have revisited Arrow's Theorem and we have seen two further examples of classical results in the field, namely Sen's Theorem on the Impossibility of a Paretian Liberal and the Muller-Satterthwaite Theorem. Along the way we have seen several intuitively appealing axioms and their formalisation, we have seen different proof methods, and we have worked in two different formal frameworks, namely social welfare functions and social choice functions. Here are the main references for the material covered:

Characterisation Results (4up).
In this lecture we have seen three different approaches to the characterisation of voting rules.

The first approach is the axiomatic method, which we have already used to derive impossibility theorems. Here the aim is to identify a set of axioms (i.e., properties) that uniquely identify a voting rule (or possibly a family of voting rules). As examples, we have seen May's characterisation of the plurality rule and Young's characterisation of the positional scoring rules.

The second approach is based on, first, a notion of consensus (specifying a set of ballot profiles in which there is a clear winner) and, second, a notion of distance between profiles. The winners of an election are then the winners in the consensus profiles that are closest to the actual profile. We have seen two voting rules that are most conveniently defined in this framework (Dodgson and Kemeny) and two others that it is possible to "rationalise" in terms of consensus and distance (Borda and plurality).

The third approach, finally, is based on the idea that voting can be a mechanism for tracking the truth: we assume that there is an objectively correct choice to be made and each voter receives a distorted signal regarding this choice; the voting procedure then has to estimate the most likely correct choice given the observed ballots provided by the voters. The classical Condorcet Jury Theorem fits into this framework, and we have also seen how to characterise the Borda rule by choosing an appropriate "noise model". Here are two representative papers for each of the three approaches:
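To make the positional scoring rules concrete: a scoring vector assigns points to ballot positions, with plurality using (1, 0, ..., 0) and Borda using (m-1, m-2, ..., 0). A small sketch (the profile is an invented example) shows that the two rules can elect different winners on the same profile:

```python
def scoring_winners(profile, vector):
    """Winners under the positional scoring rule given by `vector`:
    the candidate in position i of a ballot receives vector[i] points."""
    scores = {}
    for ballot in profile:
        for pos, cand in enumerate(ballot):
            scores[cand] = scores.get(cand, 0) + vector[pos]
    top = max(scores.values())
    return sorted(c for c, s in scores.items() if s == top)

# Seven voters, three candidates (invented profile):
profile = (
    3 * [["a", "b", "c"]] +
    2 * [["b", "c", "a"]] +
    2 * [["c", "b", "a"]]
)

print(scoring_winners(profile, [1, 0, 0]))  # plurality: ['a']
print(scoring_winners(profile, [2, 1, 0]))  # Borda: ['b']
```

Candidate a has the most first places, but b is ranked first or second by every voter, which the Borda vector rewards.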

Strategic Manipulation (4up).
We have introduced the problem of strategic manipulation: sometimes a voter can improve the outcome of an election for herself by misrepresenting her preferences. While in earlier lectures we had treated voting rules merely as mechanisms that map a profile of ballots supplied by the voters to election winners, the notions of truthful voting and strategic manipulation provide the missing connection between actual preferences and ballots cast in an election. A voting rule is called strategy-proof if it never gives any voter an incentive to not vote truthfully. The main theorem in this area of social choice theory is the Gibbard-Satterthwaite Theorem, which shows that no resolute voting rule (that does not exclude some alternatives from winning to begin with) can be strategy-proof without also being dictatorial. We have proved the theorem to be a simple corollary to the Muller-Satterthwaite Theorem, by showing that strategy-proofness entails strong monotonicity. You can read up on the proof here:

We have then looked into ways of trying to circumvent the problem raised by the Gibbard-Satterthwaite Theorem and discussed two approaches in detail. The first approach is to limit the range of profiles on which we expect our voting procedure to perform well. The best known such domain restriction is single-peakedness, which is a reasonable assumption in some cases and which constitutes a sufficient condition for the existence of a strategy-proof voting rule. The second approach is one of the by now classical ideas in computational social choice: look for voting rules that, while not immune to manipulation, are resistant to it in the sense that manipulation is computationally intractable to perform. We have seen that several standard voting rules do not enjoy this property, but STV does. We have also discussed the coalitional manipulation problem, in which computational intractability can often be achieved even for elections with small numbers of candidates (we have seen this for the Borda rule). Here are a few of the papers that were cited in class:
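The phenomenon of manipulation itself is easy to reproduce. Below is a small sketch (both the profile and the lexicographic tie-breaking rule are invented for illustration) in which, under the resolute Borda rule, voter 1 obtains a strictly better outcome by misreporting her ranking:

```python
def borda_winner(profile):
    """Resolute Borda rule: m-1 points for a top place, ..., 0 for the
    last place; ties are broken lexicographically."""
    m = len(profile[0])
    scores = {}
    for ballot in profile:
        for pos, cand in enumerate(ballot):
            scores[cand] = scores.get(cand, 0) + (m - 1 - pos)
    top = max(scores.values())
    return min(c for c, s in scores.items() if s == top)

# Truthful profile; voter 1's true preference is a > b > c.
truthful = [["a", "b", "c"], ["b", "a", "c"], ["c", "b", "a"]]
print(borda_winner(truthful))  # 'b'

# Voter 1 misreports a > c > b, burying b:
manipulated = [["a", "c", "b"], ["b", "a", "c"], ["c", "b", "a"]]
print(borda_winner(manipulated))  # 'a' -- better for voter 1
```

By demoting the truthful winner b, voter 1 forces a three-way tie that the tie-breaking rule resolves in favour of her most preferred alternative a.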

Ranking Sets of Objects (4up).
This has been a short introduction to a small subfield in social choice theory addressing the problem of devising principles for how to extend a preference order declared over individual objects to a preference order declared over nonempty sets of those objects. Such principles are needed, for instance, to reason about the incentives of a voter to manipulate in the context of an election that might yield a set of winners and to reason (in a non-probabilistic fashion) about uncertain effects of actions. We have covered some of the fundamental axioms formulated for this problem, we have proved the Kannai-Peleg Theorem, the seminal impossibility theorem in the field, and we have briefly discussed the use of tools from automated reasoning to automatically derive results of this type. Here are the main papers cited in class:

Logics for Social Choice (4up).
If we were able to fully formalise parts of social choice theory by embedding it into a logic with a well-defined formal semantics and proof theory, then this might open up the way to being able to automatically prove relevant facts about social choice mechanisms. Aside from such practical considerations, it is interesting to understand what level of expressivity is required from a logic to be able to encode a particular axiom (the point being that axioms requiring less expressivity might be deemed more natural). This lecture has been an introduction to this still under-developed part of computational social choice. We have focussed on attempts to embed the Arrovian framework of social welfare functions into a suitable logic, including propositional logic, modal logic, and first-order logic. Here are the main papers discussed in class:

Preference Representation (4up).
So far, we have always taken preferences to be linear orders on the set of alternatives, and we have not worried about the explicit representation of preferences using a formal language. This lecture has looked into both of these matters more deeply. First, we have seen that other types of preorders, as well as interval orders, may be attractive choices for modelling preference orders, and we have also discussed the use of utility functions as a means of modelling preferences. Second, we have reviewed a number of preference representation languages for sets of alternatives that have a combinatorial structure, notably CP-nets and two logic-based languages, namely prioritised goals and weighted goals. We have then exemplified some of the problems studied in the field of preference representation for the case of weighted goals, and compared several instances of this family of languages in terms of their expressive power, their relative succinctness, and the complexity of the basic reasoning tasks they induce. To learn more, consult the following papers:
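The idea behind weighted goals can be sketched as follows (the formulas, weights, and sum-based aggregation below are one illustrative instance; the literature also studies other aggregators, such as max): an agent's utility for an alternative, viewed as a truth assignment, is the sum of the weights of her satisfied goal formulas.

```python
# An alternative is a truth assignment to propositional variables;
# a goal base is a list of (formula, weight) pairs, with formulas
# encoded here as Python predicates over the assignment.
goal_base = [
    (lambda v: v["p"] and v["q"], 5),   # weight 5 for: p AND q
    (lambda v: not v["p"],        2),   # weight 2 for: NOT p
    (lambda v: v["q"] or v["r"],  1),   # weight 1 for: q OR r
]

def utility(assignment, goal_base):
    """Sum of the weights of the satisfied goal formulas."""
    return sum(w for formula, w in goal_base if formula(assignment))

alt1 = {"p": True,  "q": True,  "r": False}
alt2 = {"p": False, "q": False, "r": True}
print(utility(alt1, goal_base))  # 5 + 1 = 6
print(utility(alt2, goal_base))  # 2 + 1 = 3
```

This makes the representation questions from the lecture tangible: restricting the syntax of the formulas or the range of the weights changes which utility functions the language can express, and how succinctly.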

Meet me no later than the end of this week to discuss plans for your paper.

Week 10: 9 November 2011

Voting in Combinatorial Domains (4up).
When the decision to be made by a group of voters can be described in terms of a range of possible assignments to each of a set of variables, then we are facing a problem of voting in combinatorial domains. In fact, most collective decision making problems faced in practice will have this kind of combinatorial structure. In this lecture we have reviewed several possible approaches to dealing with problems of this kind and we have remarked that none of them currently provides a fully satisfactory solution. To date, the most promising approaches are distance-based procedures, sequential voting, and voting with compactly expressed preferences. For an overview of the field, consult these reviews:
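One reason why naive approaches fail can be reproduced directly: deciding each issue independently by majority can elect a combination of values that not a single voter voted for, the so-called multiple election paradox. A minimal sketch with an invented profile over three binary issues:

```python
# Three voters, three binary issues; each ballot is a 0/1 vector.
profile = [
    (1, 1, 0),
    (1, 0, 1),
    (0, 1, 1),
]

def issue_by_issue_majority(profile):
    """Decide each issue independently by majority voting."""
    n = len(profile)
    return tuple(
        1 if sum(ballot[i] for ballot in profile) > n / 2 else 0
        for i in range(len(profile[0]))
    )

outcome = issue_by_issue_majority(profile)
print(outcome)             # (1, 1, 1)
print(outcome in profile)  # False: no voter submitted this ballot
```

Each issue individually has a 2-to-1 majority for 1, yet the combined outcome (1, 1, 1) appears on nobody's ballot, illustrating why issue-by-issue voting can be problematic when preferences over issues are not independent.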

Judgment Aggregation (4up).
This has been an introduction to the theory of judgment aggregation, which studies the problem of aggregating a set of individual judgments on the acceptability of a range of logically related propositions into a single collective set of judgments. We have seen a basic impossibility result, showing that no judgment aggregation procedure can meet a small number of seemingly reasonable requirements, and then explored ways of circumventing this impossibility by weakening some of these requirements. This perspective has naturally led to the discussion of several concrete aggregation procedures. Finally, we have seen how the standard model of preference aggregation can be embedded into the framework of judgment aggregation. You can read up on the material covered in these expository reviews:
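The core difficulty can be seen in the classical "discursive dilemma" (the agenda consisting of p, q, and their conjunction, with the judgments below, is the standard textbook example): propositionwise majority voting over individually consistent judgment sets can yield a collectively inconsistent one.

```python
# Agenda: p, q, and the conjunction (p AND q).
# Each judge submits an individually consistent judgment set.
judgments = [
    {"p": True,  "q": True,  "p_and_q": True},
    {"p": True,  "q": False, "p_and_q": False},
    {"p": False, "q": True,  "p_and_q": False},
]

def propositionwise_majority(judgments):
    """Accept a proposition iff a majority of judges accepts it."""
    n = len(judgments)
    return {
        prop: sum(j[prop] for j in judgments) > n / 2
        for prop in judgments[0]
    }

collective = propositionwise_majority(judgments)
print(collective)
# The collective set accepts p and accepts q, but rejects p AND q:
print(collective["p"] and collective["q"] and not collective["p_and_q"])  # True
```

Each individual judgment set is logically consistent, yet the majority outcome is not: this is the inconsistency that the impossibility results in this area generalise.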

Advanced Judgment Aggregation (4up).
This has been a second lecture on judgment aggregation, in which we have discussed: axiomatic characterisations of the quota rules and the majority rule; agenda characterisation theorems, which concern either the possibility of devising an aggregation procedure that is consistent for a given class of agendas and meets certain axiomatic requirements, or the safety of the agenda problem, which asks whether all procedures meeting a given set of axioms will be consistent for a given class of agendas; and the computational complexity of the safety of the agenda problem. Some of this material can be found in the following two papers:

Guest lecture by Umberto Grandi on Belief Merging.
This has been an introduction to belief merging, including a discussion of connections to both belief revision and judgment aggregation. You can find more information in this paper:

Fair Division (4up). This has been an introduction to fair division. We have defined a number of criteria for assessing the fairness and efficiency of an allocation of goods, we have briefly discussed methods for solving the combinatorial optimisation problem of allocating indivisible goods, and we have discussed cake-cutting algorithms (i.e., methods for allocating portions of a single divisible good) in somewhat more detail. You can find more information and bibliographic references in my lecture notes on fair division.
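The simplest cake-cutting procedure for two agents, cut-and-choose, can be sketched numerically (here the cake is discretised into slices with invented per-agent values; with these values the cut lands exactly at the half-way point of the cutter's valuation): agent 1 cuts where she values both pieces equally, agent 2 takes the piece she prefers, and each agent receives at least half of her own total valuation, i.e. the allocation is proportional.

```python
def cut_and_choose(v1, v2):
    """Cut-and-choose on a cake discretised into slices.
    v1, v2: per-slice values of agents 1 and 2 (invented numbers).
    Agent 1 cuts after the shortest prefix worth half of her total;
    agent 2 then picks the piece she values more."""
    half, acc, cut = sum(v1) / 2, 0, 0
    for i, value in enumerate(v1):
        acc += value
        if acc >= half:
            cut = i + 1
            break
    left2, right2 = sum(v2[:cut]), sum(v2[cut:])
    if left2 >= right2:  # agent 2 takes the left piece
        return sum(v1[cut:]), left2
    return sum(v1[:cut]), right2

# Agent 1 values the cake uniformly; agent 2 values the ends more.
u1, u2 = cut_and_choose([1, 1, 1, 1], [3, 1, 1, 3])
print(u1, u2)  # 2 4: each agent gets at least half of her own total
```

Agent 1 is guaranteed half of her valuation whichever piece remains, and agent 2 can only do at least as well as half by choosing her preferred piece; this is the proportionality guarantee discussed in the lecture.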

CLASS CANCELLED (instead we will organise the student presentations next week)

Week 16: 19 December 2011

The student presentations will take place on Monday 19 December 2011 in room A1.04. Each speaker has 20 minutes, followed by questions. Please be on time (speakers should arrive 15 minutes in advance of their session). Guests are welcome. Here is the schedule (the papers on which these talks are based can be found at the bottom of this page):

11:00  Giovanni
11:30  Kyndylan
12:00  David
13:30  Pawel
14:00  Apostolos
14:30  Julia
15:30  Andreea
16:00  Dominik
16:30  Riccardo

Submit your final paper no later than 31 December 2011 (by email).

Papers:
You will have to write a paper about a recent result from the literature
and present your findings in a talk at the end of the semester. The main aim of your paper
(and of your talk) should be to make an important recent result, related to the topics covered in
the course, accessible to your classmates (and to me). Apart from that, your paper should also
have some modest original content. This could, for instance, be a new proof of an existing result,
an improved presentation or formalisation of a particular model from the literature, pointing out
an unexpected connection between two different lines of work, or casting an economics-related result
in a fashion that is more accessible to "us" (people with a background in logic and/or AI).
In case your chosen paper is about classical (as opposed to computational) social choice, you may
also explore adding a computational perspective to that work.
Any of the following papers would be a good starting point (and I'd also be happy to hear your own proposals).
You should have settled on a topic by mid October. So do discuss your interests with me early on.

Reviews:
You will be asked to submit a short initial draft in early November, covering the definition
of the formal framework you are working with.
This early draft will be reviewed by two of your fellow students, in a similar fashion as a programme
committee member would review a submission to a conference.
I will not grade the drafts, but I will grade the reviews (giving a grade to the reviewer, not the author of the draft, obviously).

Formatting:
Your paper should be written in LaTeX, using the IJCAI-2011 style. Your initial draft should be 2 pages long. Your final paper should be in the order of 6 pages long, and definitely not longer than 8 pages (including bibliography, figures, appendices, etc.).