I'm not a mathematician, but I'm working on a problem that feels like an example of a more general kind of problem, and I'm hoping that someone might be able to point me in the right direction.

The problem is to find a convex combination of different ways of ranking $n$ items that satisfies certain constraints. To be concrete: suppose we have $n$ individuals with endowments $b_1 > b_2 > b_3 > \dots > b_n$. There are positions $1, \dots, n$ to which the individuals can be assigned, with position $j$ giving a value $v_j$, where $v_1 > v_2 > v_3 > \dots > v_n \ge 0$.

I want to create a non-deterministic algorithm for assigning individuals to positions that is in some sense "equitable," so that each individual's expected proportional value from their position equals their proportional endowment. E.g., suppose we have some method that assigns individual $i$ to position $1$ with probability $p_{i1}$, to position $2$ with probability $p_{i2}$, and so on; I want the algorithm to generate an allocation such that:

$$\frac{\sum_j p_{ij} v_j}{\sum_j v_j} = \frac{b_i}{\sum_j b_j} \quad \forall i$$

Some observations / thoughts:

For this to work even in the $n=2$ case, we need to constrain $b_1$ so that it isn't proportionally larger than $v_1$ (otherwise, even always placing individual 1 at position 1 wouldn't be enough).

I originally thought this could be framed as a linear programming problem, where the goal is to find weights for each of the $n!$ possible orderings. Maybe this would work, but it would be computationally infeasible.

A particularly nice approach might be one that sequentially assigns positions by having each remaining individual "buy" probability shares with their budget and then drawing a winner. Unfortunately, this doesn't have the equitable property above, but I was thinking that perhaps, if we thought of the allocation as happening repeatedly, we could give the "losers" (those whose payoff from their position was too small) a bigger endowment, taken from the "winners," in such a way that the expected payoff converges to the equitable outcome.
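To make the sequential scheme concrete, here is a minimal Python sketch of what I have in mind (the function name and interface are just illustrative, not part of any standard library); as noted, its expected payoffs are not equitable in general:

```python
import random

def sequential_lottery(b, rng=random):
    """One run of the sequential 'buying' scheme: for each position in
    turn, the remaining individuals buy winning probability in proportion
    to their endowments, and a winner is drawn.
    Returns pos, where pos[i] is the (0-indexed) position of individual i."""
    n = len(b)
    remaining = list(range(n))
    pos = [None] * n
    for position in range(n):
        weights = [b[i] for i in remaining]
        # Winner is drawn with probability proportional to endowment.
        winner = rng.choices(remaining, weights=weights)[0]
        pos[winner] = position
        remaining.remove(winner)
    return pos
```

This is just one draw of an assignment; the question is how to correct the endowments between repeated draws so that expected payoffs converge to the equitable targets.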

Anyway, thanks for reading this far and I appreciate any comments, suggestions, answers etc.

Thanks Will (very helpful reference), but wouldn't these methods just give me a single, deterministic mapping? I think I need a non-deterministic procedure that generates a distribution of assignments, since any particular assignment will be "wrong" in the sense that the payoffs will just reflect the associated $v$'s of that single assignment, which doesn't satisfy the equity criterion.
– John Horton, Jan 4 '12 at 4:26

3 Answers

I originally thought this could be framed as a linear programming problem, where the goal is to find weights for each of the n! possible orderings. Maybe this would work, but it would be computationally infeasible.

Linear programs with this many variables are solved all the time using column generation:

In your problem, let variable $q_k$ denote the probability of using permutation $\pi_k$, and let variable $p_{ij}$ denote the marginal probability that agent $i$ is assigned to position $j$, so you have a total of $n! + n^2$ variables. Let $\pi_k(i,j) = 1$ if agent $i$ is assigned to position $j$ in permutation $k$ and zero otherwise. We can relate the $p_{ij}$'s and $q_k$'s via $n^2$ constraints of the form

$\sum_k q_k \pi_k(i,j) = p_{ij} \forall i,j$

Now, assume as Kevin did that $\sum_i v_i = \sum_j b_j = 1$. Your equity criterion is just $\sum_j p_{ij} v_j = b_i \forall i$. We obviously must require that $q_k \geq 0$ for all $k$. Thus the feasible set of these permutations can be written as:

$\sum_k q_k = 1$

$p_{ij} = \sum_k q_k \pi_k (i,j) \forall i,j$

$q_k \geq 0 \forall k$

$\sum_j p_{ij} v_j = b_i \forall i$

There are a total of $1 + n^2 + n$ equality constraints and $n!$ inequality constraints, and you have $n! + n^2$ variables. Any basic feasible solution of this set (i.e. a corner point) must therefore have at least $n! - (n+1)$ of the inequality constraints active, which means that the corner points have at most $n+1$ nonzero $q_k$'s. Thus, if your problem is feasible at all for particular choices of $v$ and $b$, it's possible to find such a lottery using at most $n+1$ permutations.
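For small $n$ you can of course skip column generation entirely and solve the primal over all $n!$ permutations directly. A sketch using `scipy.optimize.linprog` (assuming SciPy is available; the function name and the normalization $\sum_i v_i = \sum_i b_i = 1$ are my own choices):

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def equitable_lottery(v, b):
    """Brute-force LP: find weights q_k over all n! permutations so that
    each agent's expected position value equals their endowment.
    Returns the weight vector q (indexed like itertools.permutations),
    or None if the problem is infeasible. Small n only."""
    n = len(v)
    perms = list(itertools.permutations(range(n)))
    # One equality row per agent (expected value = endowment),
    # plus one row forcing the weights to sum to 1.
    A = np.zeros((n + 1, len(perms)))
    for k, perm in enumerate(perms):
        for i in range(n):
            A[i, k] = v[perm[i]]  # agent i gets position perm[i] under perm k
        A[n, k] = 1.0
    rhs = np.append(b, 1.0)
    res = linprog(c=np.zeros(len(perms)), A_eq=A, b_eq=rhs,
                  bounds=(0, 1), method="highs")
    return res.x if res.success else None
```

Here the marginals $p_{ij}$ have been eliminated by substitution, leaving only the $q_k$'s; a zero objective turns the LP into a pure feasibility problem.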

You may gain further insight by looking at the dual of this LP: the primal problem above has $n^2 + n!$ variables, $1 + n^2 + n$ equality constraints, and $n!$ inequality constraints, so its dual will have $n^2 + n!$ constraints but only $1 + n^2 + n$ variables. Those constraints should have a nice structure (something like one constraint for each permutation, summing over that permutation's nonzero $\pi_k(i,j)$'s), which could give you a polynomial-time separation oracle for solving the dual LP. There is likely a way to recover a primal solution from the dual using complementary slackness.

It seems to me you should be able to work out a formula for $p_{ij}$ explicitly, by solving that system of linear equations you wrote down. More to the point, you can do this before deciding on what algorithm you're going to use to assign the ranks.

For this solution to correspond to a real-world solution to your problem, it seems to me that the matrix $P = (p_{ij})$ ought to be doubly stochastic (its rows and columns should sum to 1), because everyone should get a rank, and every rank should get a person. If this doesn't happen, you're out of luck.

Once you've done this, your doubly stochastic matrix $P$ can be expressed as a convex combination of permutation matrices (this is the Birkhoff-von Neumann theorem). Each of these permutation matrices corresponds to a rank assignment.

You should be able to come up with your algorithm, then, by finding a constructive proof of Birkhoff-von Neumann and realizing it with code. I sort of doubt that this would be efficient without further cleverness, but it might be a place to start.
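For what it's worth, here is a plain-Python sketch of that constructive proof (illustrative only, with no claim of efficiency): repeatedly extract a permutation supported on the positive entries via bipartite matching, subtract the largest multiple of it you can, and repeat.

```python
def birkhoff_decompose(P, tol=1e-9):
    """Decompose a doubly stochastic matrix P (list of lists) into a
    convex combination of permutation matrices, following the
    constructive proof of the Birkhoff-von Neumann theorem.
    Returns [(weight, perm), ...] where perm[i] is the column (position)
    assigned to row (individual) i. Assumes P really is doubly
    stochastic, so a perfect matching on the support always exists."""
    n = len(P)
    P = [row[:] for row in P]  # work on a copy
    parts = []
    remaining = 1.0
    while remaining > tol:
        # Kuhn's augmenting-path algorithm: match rows to columns
        # using only entries with P[i][j] > tol.
        match = [-1] * n  # match[j] = row currently matched to column j

        def augment(i, seen):
            for j in range(n):
                if P[i][j] > tol and j not in seen:
                    seen.add(j)
                    if match[j] == -1 or augment(match[j], seen):
                        match[j] = i
                        return True
            return False

        for i in range(n):
            augment(i, set())
        perm = [-1] * n
        for j in range(n):
            perm[match[j]] = j
        # Subtract the largest multiple of this permutation matrix.
        theta = min(P[i][perm[i]] for i in range(n))
        for i in range(n):
            P[i][perm[i]] -= theta
        parts.append((theta, perm))
        remaining -= theta
    return parts
```

Each extracted permutation is a deterministic rank assignment, and the weights are the probabilities with which to use them.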

Except for very small $n$, the system of equations for the $p_{ij}$ is underdetermined, even if you add the conditions that they form a doubly stochastic matrix. So there are typically going to be lots of solutions, some of which might be easier to implement than others.
– Hugh Thomas, Jan 4 '12 at 19:00

Ok, the more I think about it, the dumber I realize my answer was. Sigh. However, sometimes grossly underdetermined problems have sparse solutions. Maybe you could try using only those permutations which are products of $k$ adjacent transpositions, for some small $k$. You could just try $k=2, 3, 4, ...$ and stop when you find a solution. The double stochasticity conditions are linear equations, so this is still just linear algebra.
– Benjamin Young, Jan 4 '12 at 20:53

Our algorithm will proceed in at most $n$ stages. At each stage we will assume that we have a sorted list $b_1 \geq b_2 \geq \dots \geq b_n$ with $\sum b_i=1$.

Consider the assignment scheme where we assign positions according to this order, breaking ties randomly (so, for example, if $b_1=b_2=b_3>b_4$, then individuals $1$, $2$, and $3$ will receive positions $1$, $2$, and $3$ in an order drawn uniformly from the $6$ possible orders). Under this scheme, each individual will receive an expected valuation $y_i$ (by construction, the $y_i$ are also sorted, with ties for individuals with equal endowments).

Let $p \leq 1$ be chosen maximal subject to the constraint that for every $i$ we have

$$b_i - p y_i \geq b_{i+1} - p y_{i+1}$$

We follow this assignment scheme with probability $p$. If we do not follow this assignment scheme, we replace each $b_i$ with $b_i-py_i$ and rescale so that the sum of the $b_i$ remains equal to $1$. At each stage one of two things will happen.
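To make a stage concrete, here is an illustrative Python sketch (names are mine) of the two quantities used above: the expected valuations $y_i$ under random tie-breaking, and the maximal $p$. It assumes $b$ and $v$ are already sorted in decreasing order.

```python
def expected_values(b, v):
    """Expected valuation y_i for each individual under the sorted
    assignment with random tie-breaking: a block of individuals with
    equal endowments shares the average of that block's position values."""
    n = len(b)
    y = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j < n and b[j] == b[i]:
            j += 1
        block_avg = sum(v[i:j]) / (j - i)
        for k in range(i, j):
            y[k] = block_avg
        i = j
    return y

def max_p(b, y):
    """Largest p <= 1 with b_i - p*y_i >= b_{i+1} - p*y_{i+1} for all i,
    i.e. p*(y_i - y_{i+1}) <= b_i - b_{i+1} wherever y_i > y_{i+1}."""
    p = 1.0
    for i in range(len(b) - 1):
        dy = y[i] - y[i + 1]
        if dy > 0:
            p = min(p, (b[i] - b[i + 1]) / dy)
    return p
```

With $p$ in hand, a stage either commits to the sorted scheme (probability $p$) or recurses on the rescaled residual endowments $b_i - p y_i$.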

- $p$ is strictly less than $1$. In this case the new $b_i$ correspond to the target expected values, given that the algorithm hasn't already chosen an assignment. Furthermore, the number of distinct values of $b$ is reduced by at least $1$ (by maximality of $p$, some constraint becomes tight, merging two adjacent values). So we can't be in this state indefinitely.

- $p=1$. If this happens because the $b_j$ and the $y_j$ are exactly equal, we are done and have produced the desired assignment probabilities. If not, then (because the $b_i-y_i$ are sorted and sum to $0$) there must be some $i$ such that at this stage $b_1 = b_2 = \dots = b_i>b_{i+1}$ and $y_1=y_2=\dots=y_i < b_1$. This means there's no satisfying assignment at the current stage, since the average of the $i$ largest $b$'s is strictly larger than the average of the $i$ largest $v$'s.

But at every stage of our algorithm we assigned as much value as possible to the $b_i$ (since our sorting never changed and $b_i$ was always strictly larger than $b_{i+1}$, we always assigned the $i$ most valuable positions to the $i$ largest endowments). So the original problem must also have had an average of the $i$ largest $b$'s that was too large for the constraints to be satisfiable.

Edit: One corollary of this algorithm is that an assignment is possible iff $\sum_{j=1}^k b_j \leq \sum_{j=1}^k v_j$ for every $k$. If this doesn't hold, the algorithm might return an assignment before you reach the $p=1$ stage, but the assignment won't have the desired expected values.
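That corollary is easy to check directly; an illustrative sketch (function name is mine; a small tolerance is added for floating point):

```python
def assignment_feasible(b, v):
    """Check the corollary's condition: an equitable lottery exists iff
    every prefix sum of the sorted endowments is at most the
    corresponding prefix sum of the sorted values (both summing to 1)."""
    b = sorted(b, reverse=True)
    v = sorted(v, reverse=True)
    sb = sv = 0.0
    for bi, vi in zip(b, v):
        sb += bi
        sv += vi
        if sb > sv + 1e-12:  # prefix of b exceeds prefix of v
            return False
    return True
```

This is exactly the constraint flagged in the question for the $n=2$ case ($b_1 \leq v_1$), extended to every prefix.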