The application is nearest-neighbour interpolation:
given values $z_j$ at sample points $X_j$, and a query point $P$,
one takes the $N$ points $X_j$ nearest to $P$ ($N$ fixed) and averages their $z_j$.
If $P$ is not in the convex hull of those $N$ nearest points,
the interpolation is one-sided, hence less accurate.
I'd like to be able to say
"taking 6 neighbors in 2d, 10 in 3d, is seldom one-sided".

If anyone could point me to self-contained pseudocode for a function Inhull( $N$ points )
(without calling a full linear-programming solver), that would be useful too.
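In 2-d, at least, there is an LP-free test: $P$ lies outside the convex hull of the $X_j$ exactly when all the direction vectors $X_j - P$ fit in an open half-plane, i.e. when the largest angular gap between consecutive directions exceeds $\pi$. A sketch (function name is my own):

```python
import math

def in_hull_2d(points, p):
    """Return True if p lies in the (closed) convex hull of `points` in 2-d.

    Half-plane criterion: p is strictly outside the hull iff all direction
    vectors points[j] - p fit in an open half-plane, i.e. iff the largest
    angular gap between consecutive direction angles exceeds pi.
    """
    angles = []
    for x, y in points:
        dx, dy = x - p[0], y - p[1]
        if dx == 0 and dy == 0:
            return True          # p coincides with a sample point
        angles.append(math.atan2(dy, dx))
    angles.sort()
    # largest gap between consecutive directions, wrapping around 2*pi
    max_gap = 2 * math.pi - (angles[-1] - angles[0])
    for a, b in zip(angles, angles[1:]):
        max_gap = max(max_gap, b - a)
    return max_gap <= math.pi    # gap of exactly pi: p on the boundary
```

This is $O(N \log N)$ for the sort; for general $d$ I don't know an equally simple test.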

When you say "[-1..1]", I take it that you mean the $n$-dimensional cube $[-1,1]^n$?
– Greg Kuperberg, Jul 23 '10 at 17:38

How nontrivial do you want the estimate to be? It is pretty clear how to get some bounds; it's not clear to me how to get good ones. Example in dimension 2: calculate the probability that each (quadrant) sector contains at least one point. When that happens, it is clear that the convex hull of the points contains the origin.
– Helge, Jul 23 '10 at 17:41

Theorem. If $X_1$, ..., $X_N$ are i.i.d. random points in $\mathbb R^d$ whose distribution
is symmetric with respect to $0$ and assigns measure zero to every hyperplane
through $0$, then
$$\mathbb P(0\notin \mbox{conv}\{X_1,\dots,X_N\})=\frac{1}{2^{N-1}}\sum\limits_{k=0}^{d-1}{N-1 \choose k}.$$
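Plugging in the questioner's cases (6 neighbours in 2-d, 10 in 3-d) gives about 19% and 9% respectively, assuming the nearest neighbours around $P$ satisfy the theorem's symmetry hypothesis. A short check (the function name `p_outside` is mine):

```python
from math import comb

def p_outside(n, d):
    """Wendel's formula: P(0 not in conv{X_1,...,X_n}) for a distribution
    on R^d that is symmetric about 0 and null on hyperplanes through 0."""
    return sum(comb(n - 1, k) for k in range(d)) / 2 ** (n - 1)

# The cases from the question: 6 neighbours in 2-d, 10 in 3-d.
print(p_outside(6, 2))   # 6/32 = 0.1875
print(p_outside(10, 3))  # 46/512 = 0.08984375
```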

The proof is straightforward. Let $\mu$ be the distribution of $X_k$, and set
$$ f(x_1,\dots,x_N) = \begin{cases} 1, & \mbox{if } x_1,\dots,x_N\ \mbox{ lie in an open halfspace of $\mathbb R^d$ with $0$ in the boundary}, \newline 0, & \mbox{else.} \end{cases}$$
Then due to the invariance of $\mu$ under reflection in the origin, we have that
$$\mathbb P(0\notin \mbox{conv}\{X_1,\dots,X_N\})=\int_{\mathbb R^d}\dots \int_{\mathbb R^d} \frac{1}{2^N}\sum\limits_{\varepsilon_i=\pm1}f(\varepsilon_1x_1,\dots,\varepsilon_Nx_N)\ \mu(dx_1)\dots\mu(dx_N).$$
Now, the sum
$$C(N,d)=\sum\limits_{\varepsilon_i=\pm1}f(\varepsilon_1x_1,\dots,\varepsilon_Nx_N)$$
can be interpreted as the number of connected components of $\mathbb R^d\backslash (H_1\cup\dots\cup H_N)$, where $H_1$, ..., $H_N$ are hyperplanes through $0$ in general position. But there is a classical calculation, going back to Steiner and Schläfli, which shows that
$$C(N,d)= 2\sum\limits_{k=0}^{d-1}{N-1 \choose k}.$$
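A quick Monte Carlo sanity check of the resulting formula in $d=2$, using uniform points on $[-1,1]^2$ (symmetric about $0$); names are my own. Whether $0$ lies outside the hull is decided with the half-plane criterion: $0$ is outside iff all direction angles fit in an open half-plane, i.e. some angular gap exceeds $\pi$.

```python
import math
import random
from math import comb

def wendel(n, d):
    """P(0 not in conv{X_1,...,X_n}) for a symmetric distribution in R^d."""
    return sum(comb(n - 1, k) for k in range(d)) / 2 ** (n - 1)

def origin_outside_2d(pts):
    """True iff 0 lies strictly outside conv(pts): all direction angles
    fit in an open half-plane, i.e. some angular gap exceeds pi."""
    ang = sorted(math.atan2(y, x) for x, y in pts)
    gaps = [b - a for a, b in zip(ang, ang[1:])]
    gaps.append(2 * math.pi - (ang[-1] - ang[0]))  # wrap-around gap
    return max(gaps) > math.pi

rng = random.Random(0)
n, trials = 6, 40000
hits = sum(origin_outside_2d([(rng.uniform(-1, 1), rng.uniform(-1, 1))
                              for _ in range(n)])
           for _ in range(trials))
print(wendel(n, 2), hits / trials)  # exact value vs empirical frequency
```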

Actually, I think one can answer this quite easily, at least in dimension 2.

Step 1: Call $P_1(N)$ the probability that none of the points $x_1, \dots, x_N$ lies in the sector $(0,\infty)^2$. This happens exactly when all points lie in the complement $((0,\infty)^2)^c$, so $P_1(N) = \left(\frac{3}{4}\right)^N$.

Step 2: Call $P_2(N)$ the probability that each sector contains at least one point. By independence this is just $P_2(N) = 4 \left(\frac{3}{4}\right)^N$.

So, we have that the probability that $0$ does not lie in the convex hull of $N$ points is $\leq 4 \left(\frac{3}{4}\right)^N$.

To get a similar bound in the other direction, observe that the probability that all points lie in one sector is just $(3/4)^N$ as mentioned above. So the two bounds are within constants. However, I believe that improving on the number $4$ above would be kind of tedious ...
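For what it's worth, in $d=2$ the theorem stated earlier in the thread gives the exact value $N/2^{N-1}$, so one can compare it numerically with the sector upper bound $4(3/4)^N$ (a quick sketch):

```python
# Exact d=2 value from Wendel's theorem: N / 2^(N-1).
# Sector upper bound from this answer:   4 * (3/4)^N.
for n in (4, 6, 10, 20):
    exact = n / 2 ** (n - 1)
    bound = 4 * (3 / 4) ** n
    print(n, exact, bound)  # the bound holds but is far from tight
```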

Something garbled here? In Step 2, you must mean the probability that NOT every sector contains at least one point? That's not precisely $4 (3/4)^N$, since there is no independence between the different sectors, but $4 (3/4)^N$ is indeed an upper bound. However, in your last paragraph, "the probability that all points lie in one sector" is $4(1/4)^N$, not $(3/4)^N$, isn't it?
– James Martin, Jul 23 '10 at 18:34