Suppose I have the convex hull $P$ of a finite collection of points in $\mathbb{R}^d,$ and I want to see whether a point $p$ is contained in $P.$ This is a standard (some would say *the* standard) linear programming problem: we are determining whether $p = \sum_{i=1}^{V(P)} \lambda_i v_i,$ with $\lambda_i \geq 0$ and $\sum_i \lambda_i = 1$ (the sum being over the vertices of $P$).
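This membership test can be sketched as a pure feasibility LP; the following is one possible encoding using `scipy.optimize.linprog` (the function name `in_hull` and the use of SciPy are my own choices for illustration, not part of the question):

```python
import numpy as np
from scipy.optimize import linprog

def in_hull(points, p):
    """Test whether p is in the convex hull of the rows of `points`,
    by checking feasibility of  points.T @ lam = p,  sum(lam) = 1,
    lam >= 0.  The objective is zero: this is a pure feasibility LP."""
    V, d = points.shape
    # Equality constraints stack the d coordinate equations with sum(lam) = 1.
    A_eq = np.vstack([points.T, np.ones((1, V))])
    b_eq = np.concatenate([p, [1.0]])
    res = linprog(np.zeros(V), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * V)
    return res.status == 0  # status 0: a feasible (hence optimal) point was found

square = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
print(in_hull(square, np.array([0.5, 0.5])))  # True
print(in_hull(square, np.array([2.0, 0.5])))  # False
```

Note that this only answers the membership question; a boundary point of the square also returns `True`, which is exactly why the interior question below needs a different formulation.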

But now suppose I want to know whether $p$ is strictly contained in $P$ (that is, whether $p$ is in the interior of $P$). Of course, this already assumes that $P$ has nonempty interior, but let's suppose we have some reason to know that. How do we check this? One way is to solve $V(P)$ linear programs, maximizing $\lambda_i$ for each $i=1, \dots, V(P)$ -- $p$ is interior exactly when all the maxima are positive. This is rather inefficient ($V(P)$ could be large). Another approach is to shrink $P$ slightly (that is, find a point in the interior, say the barycenter of the vertices, call it the origin, scale the vertices by $1-\epsilon$, etc.). This works, except that it is not clear what the right value of $\epsilon$ is.

@RyanO'Donnell thanks. I thought I had made it clear in the question that the points were vertices of the convex hull (which means they are in general position, as per the wiki article), but there are subtle differences between strict and non-strict convexity, which is what I thought Dima might have been alluding to.
–
Igor Rivin, Sep 18 '13 at 3:54

2 Answers
2

Take your linear program, add the objective function max $x$, and add the inequalities $\lambda_i - x \geq 0$. If the point is on the boundary, the optimal solution has $x=0$ (and if it is outside $P$, the program is infeasible). Otherwise, there is a solution with $x > 0$.

Although, since this procedure doubles the number of constraints, it is quite a considerable performance hit (much better than what I suggested in the question, but still...)
–
Igor Rivin, Sep 18 '13 at 4:06

2

It needn't actually double the number of constraints; you can replace the inequality $\lambda_i \geq 0$ by $\lambda_i - x \geq 0$, to get the same number of constraints and one more variable. I expect you'll still take a performance hit, but less of one.
–
Peter Shor, Sep 18 '13 at 21:14
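The one-extra-variable variant from this exchange can be sketched as follows (again using `scipy.optimize.linprog` as an assumed solver; the helper name `interior_lp` is mine). The optimal $x$ is the largest attainable $\min_i \lambda_i$: positive iff $p$ is interior, zero iff $p$ is on the boundary, negative iff $p$ is in the affine hull but outside $P$.

```python
import numpy as np
from scipy.optimize import linprog

def interior_lp(points, p):
    """Variables (lambda_1, ..., lambda_V, x); maximize x subject to
    points.T @ lam = p, sum(lam) = 1, and lam_i - x >= 0 (replacing
    lam_i >= 0, so the constraint count stays the same)."""
    V, d = points.shape
    c = np.zeros(V + 1)
    c[-1] = -1.0                          # linprog minimizes, so minimize -x
    A_eq = np.zeros((d + 1, V + 1))
    A_eq[:d, :V] = points.T               # coordinate equations
    A_eq[d, :V] = 1.0                     # sum(lam) = 1
    b_eq = np.concatenate([p, [1.0]])
    A_ub = np.zeros((V, V + 1))           # lam_i - x >= 0  <=>  -lam_i + x <= 0
    A_ub[:, :V] = -np.eye(V)
    A_ub[:, -1] = 1.0
    b_ub = np.zeros(V)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(None, None)] * (V + 1))
    if res.status != 0:
        return -np.inf                    # p is not even in the affine hull
    return -res.fun                       # the optimal x
```

Since $\sum_i \lambda_i = 1$ forces $\min_i \lambda_i \leq 1/V$, the program is always bounded whenever it is feasible.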

One can minimize $\sum_j \lambda_j^2$ subject to the constraints listed in the question. Then the minimizer has all entries non-zero if and only if $p$ is in the interior.
This is a convex quadratic program; it can be solved in polynomial time, and solvers are readily available, if you actually need to solve these kinds of problems.
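A minimal sketch of this QP, using `scipy.optimize.minimize` with the SLSQP method as a stand-in for a dedicated QP solver (the helper name `interior_qp` and the positivity tolerance are my own assumptions; as the comments below note, the tolerance needs care in practice):

```python
import numpy as np
from scipy.optimize import minimize

def interior_qp(points, p, tol=1e-5):
    """Minimize sum(lam_j**2) subject to points.T @ lam = p,
    sum(lam) = 1, lam >= 0.  The minimizer spreads weight over as
    many vertices as possible; all entries exceed the tolerance
    iff p is in the interior (assuming p is in the hull at all)."""
    V, d = points.shape
    cons = [
        {"type": "eq", "fun": lambda lam: points.T @ lam - p},
        {"type": "eq", "fun": lambda lam: lam.sum() - 1.0},
    ]
    lam0 = np.full(V, 1.0 / V)            # start at the barycenter weights
    res = minimize(lambda lam: lam @ lam, lam0,
                   bounds=[(0, None)] * V, constraints=cons,
                   method="SLSQP")
    return res.x, bool(np.all(res.x > tol))
```

For a boundary point, every representation is supported on the vertices of the face containing it, so the off-face entries of the minimizer are (numerically) zero and the test reports "not interior".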

Even better is to use an interior point method to solve the linear programming problem of minimizing $\sum_i a_i\lambda_i$ for a generic $a$, subject to the constraints listed. Then it's not so hard to see (it's a property of these interior point methods) that the optimal $\lambda$ will be positive iff $p$ is interior.

First question: why is it true that the entries are nonzero if and only if $p$ is in the interior? Second question: how much do you pay for quadratic vs. linear programming? (Notice that the first scheme I propose is also polynomial time, just much slower than a single LP -- it is not obvious to me, though it might well be true, that your QP is a win....)
–
Igor Rivin, Sep 17 '13 at 16:18

Also, how much precision do you need to know that all the $\lambda_i$ are positive in your QP solution? The inputs are rational, and here you are definitely out of the rational universe...
–
Igor Rivin, Sep 17 '13 at 16:28

You solve approximately, via an interior point method; there are known bounds. Actually, even better, if you're going to use an interior point method anyway, is to minimize some generic linear function of $\lambda$. The trick is that these methods converge to the so-called analytic center of the optimal face, and it's not so hard to see that the optimal $\lambda$ will be positive iff $p$ is interior.
–
Dima Pasechnik, Sep 18 '13 at 4:49

The problem with interior point methods is that the bounds are pretty bad (notice that here I want a one-bit answer, so an approximation does not mean much to me).
–
Igor Rivin, Sep 18 '13 at 14:40

If you don't use an interior point method, then you have to use a simplex method, which in theory (and often in practice) is slower...
–
Dima Pasechnik, Sep 18 '13 at 14:42