Similar questions have been asked elsewhere, but I think this is sufficiently different to warrant a new post. I have a particular matrix $A$ and would like to know when the system $Ax = 0$ has at least one non-negative solution (other than $\vec{0}$). The problem is underdetermined: in most cases I expect the number of variables to be of the order $m^2$, where $m$ is the number of equations. Furthermore, each column of the matrix sums to $0$ and every equation has a mix of positive and negative coefficients. Is this a sufficient condition for the existence of a non-negative solution?

I have seen the algorithm of http://www.jstor.org/pss/1968384, which can be used to test whether a particular system of equations has a non-negative solution, but have not been able to use it to derive a proof for a general family of matrices.
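For any particular instance, a standard numerical test (not the algorithm from the linked paper, just a common LP formulation) is to maximize $\mathbf{1}^\top x$ subject to $Ax = 0$ and $0 \le x \le 1$: by scaling, a nontrivial non-negative solution exists iff the optimum is positive. A sketch, assuming `scipy` is available:

```python
# Sketch: LP-based test for a nontrivial non-negative solution of Ax = 0.
# This is a standard formulation, not the algorithm from the linked paper.
import numpy as np
from scipy.optimize import linprog

def has_nonneg_solution(A, tol=1e-9):
    """Return True iff Ax = 0 has a solution with x >= 0, x != 0.

    Maximize sum(x) subject to Ax = 0 and 0 <= x_i <= 1; by rescaling,
    a nontrivial non-negative solution exists iff the optimum is > 0.
    """
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    res = linprog(c=-np.ones(n),            # linprog minimizes, so negate
                  A_eq=A, b_eq=np.zeros(m),
                  bounds=[(0.0, 1.0)] * n)
    return res.status == 0 and -res.fun > tol

# Columns of both examples sum to 0, yet only the first has a solution.
print(has_nonneg_solution([[1, -1], [-1, 1]]))   # x = (1, 1) works -> True
print(has_nonneg_solution([[1, 1], [-1, -1]]))   # forces x1 + x2 = 0 -> False
```

The second example already shows that the column-sum condition alone is not sufficient.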

The relevant result is Gordan's theorem: exactly one of the following two systems has a solution,
$$
Ax = 0,\quad x \in \mathbb{R}_+^n \setminus \{0\}, \qquad\text{or}\qquad A^\top y > 0,
$$
where $\mathbb{R}_+$ denotes the nonnegative reals.
(Like Farkas's lemma, this is a "theorem of the alternative"; furthermore, it can be proved from Farkas's lemma.)

A nice way to prove this is, as in Theorem 2.2.6 of Borwein & Lewis's text "Convex Analysis and Nonlinear Optimization", to consider the related optimization problem
$$
\inf_y \quad\underbrace{\ln\left(\sum_{i=1}^m \exp(y^\top A \textbf e_i)\right)}_{f(y)};
$$
as stated in that theorem, $f$ is unbounded below iff there exists $y$ such that $A^\top y > 0$. This also gives an unconstrained optimization problem you can plug into your favorite solver to determine which of the two scenarios you are in. Alternatively, you can explicitly solve for either the primal variables $x$ or the dual variables $y$ by considering a similar maximum-entropy problem (i.e. $\inf_y\sum_i \exp(y^\top A\textbf{e}_i)$, whose infimum is $0$ iff the desired $y$ exists) or its dual (you can find this in the above book, as well as in papers by the same authors).
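To make this concrete, here is a minimal pure-Python sketch (my own illustration, with made-up $2\times 2$ example matrices) of running gradient descent on $f(y)=\ln\sum_i\exp(y^\top A\mathbf{e}_i)$; the gradient is $A\,\mathrm{softmax}(A^\top y)$, so descent either stalls at a finite infimum or diverges to $-\infty$, distinguishing the two alternatives:

```python
# Minimal sketch: detect which Gordan alternative holds by running gradient
# descent on f(y) = ln(sum_i exp((A^T y)_i)).  The example matrices are my
# own, not from the question.
import math

def matvec(M, v):
    return [sum(r[j] * v[j] for j in range(len(v))) for r in M]

def transpose(M):
    return [list(col) for col in zip(*M)]

def f(A, y):
    z = matvec(transpose(A), y)          # entries (y^T A e_i)
    m = max(z)
    return m + math.log(sum(math.exp(t - m) for t in z))  # stable log-sum-exp

def grad_f(A, y):
    z = matvec(transpose(A), y)
    m = max(z)
    w = [math.exp(t - m) for t in z]
    s = sum(w)
    p = [t / s for t in w]               # softmax weights
    return matvec(A, p)                  # grad f(y) = A * softmax(A^T y)

def descend(A, steps=500, lr=0.1):
    y = [0.0] * len(A)
    for _ in range(steps):
        g = grad_f(A, y)
        y = [yi - lr * gi for yi, gi in zip(y, g)]
    return f(A, y)

# A1 has the non-negative solution x = (1, 1): f stays bounded below (by ln 2).
# A2 admits y with A^T y > 0, so f is unbounded below and descent diverges.
A1 = [[1, -1], [-1, 1]]
A2 = [[1, 1], [-1, -1]]
print(descend(A1))   # stays at ln 2 ≈ 0.693
print(descend(A2))   # large negative, still decreasing
```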

Anyway, by Gordan's theorem, your condition on the columns (which can be written $\textbf{1}^\top A = 0$) has no bearing on the question at hand. In one of your comments you mentioned wanting to generate these matrices. To produce positive examples, fix a nonnegative $x \neq 0$ and construct each row $b_i'$ by taking an arbitrary $b_i$ and projecting out the component along $x$: set $b_i' := b_i - (x^\top b_i)x / (x^\top x)$, so that $b_i'^\top x = 0$ and hence $Ax = 0$. To produce negative examples, by Gordan's theorem, choose some nonzero $y$ and build $A$ column by column, including a candidate column $a_i$ only if it satisfies $a_i^\top y > 0$.
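The two recipes above can be sketched as follows (a pure-Python illustration; the random data, seed, and sizes are arbitrary choices of mine):

```python
# Sketch of the two generation recipes; the random data and sizes are arbitrary.
import random

random.seed(0)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Positive example: fix a non-negative x != 0, then project each random row
# b onto the orthogonal complement of x, so every row satisfies b' . x = 0.
n, m = 6, 3
x = [random.uniform(0.5, 2.0) for _ in range(n)]          # x > 0 entrywise
A_pos = []
for _ in range(m):
    b = [random.uniform(-1, 1) for _ in range(n)]
    c = dot(x, b) / dot(x, x)
    A_pos.append([bj - c * xj for bj, xj in zip(b, x)])

assert all(abs(dot(row, x)) < 1e-9 for row in A_pos)      # Ax = 0, x >= 0

# Negative example: fix a nonzero y, then keep only candidate columns a
# with a . y > 0; by Gordan's theorem the resulting system has no
# nontrivial non-negative solution.
y = [1.0, -1.0, 0.5]
cols = []
while len(cols) < n:
    a = [random.uniform(-1, 1) for _ in range(m)]
    if dot(a, y) > 0:
        cols.append(a)
A_neg = [[cols[j][i] for j in range(n)] for i in range(m)]  # m x n matrix

assert all(dot(col, y) > 0 for col in cols)               # A^T y > 0
```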

A homogeneous linear system has no nontrivial nonnegative solution if (and only if) some linear combination of the equations yields a nontrivial equation with nonnegative coefficients. Nothing in your assumptions prevents, for example, the sum of the first two equations from being $x_1+x_2+\dots+x_n=0$; in that case there is obviously no nontrivial nonnegative solution.

Thanks. I'm aware of Farkas' condition for the existence of a non-negative solution. Unfortunately, applying Farkas to my problem leads me into a circular argument (to do with the origin of the equations). Are there any other methods to determine whether a non-negative solution exists?
– bandini Mar 30 '11 at 20:13

You need to say more about your problem. From what you have said, the matrix might have $m=4$ rows involving $m^2=16$ variables, as follows: each equation has 8 positive and 8 negative coefficients; the first two equations sum to give an equation with 16 positive coefficients (so the Farkas condition shows that every non-zero solution has both positive and negative entries); and the third and fourth equations are exactly the negatives of the first and second. One can of course do this with each row of the form $aaaaaaaabbbbbbbb$, but one could make small perturbations so that it was not so blatant.
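This $4 \times 16$ construction is easy to check numerically. A sketch, with the concrete (and arbitrary) choice $a=2$, $b=-1$ for the first row, and the blocks swapped in the second row so that the first two rows sum to the all-ones vector:

```python
# Sketch of the 4 x 16 matrix described above, with the concrete (and
# arbitrary) choice a = 2, b = -1 for the first row and the blocks swapped
# in the second row so that row1 + row2 = (1, ..., 1).
row1 = [2] * 8 + [-1] * 8
row2 = [-1] * 8 + [2] * 8
row3 = [-v for v in row1]          # exact negatives of the first two rows,
row4 = [-v for v in row2]          # so every column sums to zero
A = [row1, row2, row3, row4]

# Every column sums to 0 and every row mixes positive and negative entries,
# so the questioner's hypotheses hold ...
assert all(sum(col) == 0 for col in zip(*A))
assert all(any(v > 0 for v in r) and any(v < 0 for v in r) for r in A)

# ... yet row1 + row2 is strictly positive: summing the first two equations
# gives sum_i x_i = 0, so any non-negative solution must be the zero vector.
# Equivalently, y = (1, 1, 0, 0) certifies A^T y > 0 in Gordan's theorem.
assert all(r1 + r2 > 0 for r1, r2 in zip(row1, row2))
```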

Thanks. I'm able to solve this linear program for any particular instance. However, for my problem I want to prove that every matrix, generated in a certain way, is guaranteed to give rise to a system of linear equations with a non-negative solution.
– bandini Mar 30 '11 at 21:12