This question is inspired by a recent talk by Matt Kahle on random geometric complexes.

Some simple notation: let $\mathcal{B} \subset \mathbb{R}^d$ be the unit ball in $d$-dimensional Euclidean space with the usual norm $\|\cdot\|$ and let $\mathcal{B}^k$ denote the $k$-fold product of this ball with itself for any positive integer $k$. For each $n \in \mathbb{N}$, define $K_n \subset \mathcal{B}^n$ as follows:

$$K_n = \bigl\{ (p_1, \ldots, p_n) \in \mathcal{B}^n \ : \ \|p_i - p_j\| \leq 1 \text{ for all } i, j \bigr\}.$$

Thus, $K_n$ consists of those $n$-tuples of points in the unit ball whose diameter is bounded above by $1$. I would like to know what fraction of $\mathcal{B}^n$ lies in $K_n$ asymptotically as $n$ increases. More precisely, let $\mu$ denote Lebesgue measure on $\mathbb{R}^d$, let $\mu_n$ denote the product measure on $(\mathbb{R}^d)^n$, and define
$$\chi(n) = \frac{\mu_n(K_n)}{\mu(\mathcal{B})^n}.$$

How does $\chi(n)$ behave for fixed dimension $d$ as $n \to \infty$?

While an answer to this specific question would be great, I would be much more interested in some insight regarding how one should tackle such problems. The Calculus approach of setting up some integral fails horribly -- at least, I should confess that I tried $n = 2 = d$ and got hopelessly stuck with what appears to be an elliptic integral of the murderous kind -- so I expect that there is no closed formula for $\chi(n)$. But surely someone has worked out asymptotic envelopes for such a basic quantity!
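
Not an answer, but as a sanity check on any conjectured asymptotics: for small $d$ and $n$ the quantity $\chi(n)$ is easy to estimate by Monte Carlo, since it is exactly the probability that $n$ i.i.d. uniform points in the ball have diameter at most $1$. A sketch for $d = 2$ (the function names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_unit_disk(n, rng):
    # uniform points in the unit disk: sqrt-distributed radius, uniform angle
    r = np.sqrt(rng.random(n))
    theta = 2 * np.pi * rng.random(n)
    return np.column_stack((r * np.cos(theta), r * np.sin(theta)))

def chi_estimate(n, trials=20000, rng=rng):
    # fraction of i.i.d. n-tuples in the disk whose diameter is at most 1
    hits = 0
    for _ in range(trials):
        pts = sample_unit_disk(n, rng)
        dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        if dists.max() <= 1.0:
            hits += 1
    return hits / trials
```

For $d = 2$ this gives $\chi(2) \approx 0.59$, and the estimates decay quickly as $n$ grows.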

@Vidit: Thanks for accepting my answer. I would also like to direct everyone's attention to Will Sawin's answer below which gives very sharp asymptotic bounds for $\chi(n)$.
– Ricardo Andrade, Jun 19 '13 at 11:52

3 Answers

I am certainly not the best person to answer this question, as I do not have much insight to share on how to approach this kind of problem. My only (fairly obvious) suggestion is to estimate the relevant quantities in any way possible. In this process, it can be very helpful to reduce the calculations to lower dimensions, and the Fubini theorem will be our friend here.$\newcommand{\norm}[1]{\lVert #1 \rVert}$$\newcommand{\abs}[1]{\lvert #1 \rvert}$$\newcommand{\suchthat}{\ : \ }$$\newcommand{\set}[1]{\left\lbrace #1 \right\rbrace}$$\newcommand{\NN}{\mathbb{N}}$$\newcommand{\RR}{\mathbb{R}}$$\newcommand{\ball}[2][0]{{B_{#2}(#1)}}$$\newcommand{\unitball}{\ball{1}}$$\newcommand{\dd}{\:\mathrm{d}}$$\newcommand{\label}[1]{\rlap{\qquad\qquad \text{#1}}}$

For this problem, my one piece of intuition is that when one of the points $p_i$ drifts away from the centre of the unit ball, every other point $p_j$ with $j\neq i$ is barred from a certain portion of the unit ball which sits close to $-p_i/\norm{p_i}$. With sufficiently many points, this should force the ratio $\chi(n)$ to zero as $n$ goes to infinity. I will try to make this intuition precise in the remainder of this answer.

To set this up, write $\mu_n$ for Lebesgue measure on $(\RR^d)^n$. Let $X_n = \ball{1/2}^{\times n}$ be the set of tuples all of whose points lie in $\ball{1/2}$; any two such points are within distance $1$ of each other, so $X_n \subset K_n$. For each $i$, let $Z_{n,i} \subset K_n$ be the set of tuples of $K_n$ whose $i$-th point satisfies $p_i \notin \ball{1/2}$. Any tuple of $K_n$ outside $X_n$ lies in some $Z_{n,i}$, and the sets $Z_{n,i}$ all have the same measure by symmetry, so
$$ \mu_n(K_n) \leq \mu\bigl(\ball{1/2}\bigr)^n + n\cdot\mu_n(Z_{n,1}) \label{(A)} $$
For $x \in \unitball$, let $P_x \subset \unitball^{n-1}$ denote the set of tuples $(p_2,\ldots,p_n)$ such that $(x,p_2,\ldots,p_n) \in K_n$. By the Fubini theorem,
$$ \mu_n(Z_{n,1}) = \int_{\unitball\setminus\ball{1/2}} \mu_{n-1}(P_x) \dd x $$

Lemma: For each $x\in\unitball\setminus\ball{1/2}$, the inequality $\mu_{n-1}(P_x) \leq \mu\bigl(\unitball\cap\ball[1/2,0,\ldots,0]{1}\bigr)^{n-1}$ holds.

I will present a proof of the lemma at the end of this answer. We now apply the estimate in the lemma to the preceding integral expression for $\mu_n(Z_{n,1})$ to obtain:
$$ \mu_n(Z_{n,1}) \leq \mu\bigl( \unitball \bigr) \cdot \mu\bigl( \unitball\cap\ball[1/2,0,\ldots,0]{1} \bigr)^{n-1} $$
Putting this together with estimate (A):
$$ \mu_n(K_n) \leq \mu\bigl(\ball{1/2}\bigr)^n + n\cdot \mu\bigl( \unitball \bigr) \cdot \mu\bigl(\unitball\cap\ball[1/2,0,\ldots,0]{1}\bigr)^{n-1} $$
and further using the fact that $\ball{1/2} \subset \unitball\cap\ball[1/2,0,\ldots,0]{1}$, we simplify
$$ \mu_n(K_n) \leq (n+1) \cdot \mu\bigl( \unitball \bigr) \cdot \mu\bigl(\unitball\cap\ball[1/2,0,\ldots,0]{1}\bigr)^{n-1} $$
Consequently, we obtain an upper bound for $\chi(n)$:
$$ \chi(n) = \frac{\mu_n(K_n)}{\mu\bigl(\unitball\bigr)^n} \leq (n+1)\cdot\rho^{n-1} \qquad\text{where}\quad \rho = \frac{\mu\bigl(\unitball\cap\ball[1/2,0,\ldots,0]{1}\bigr)}{\mu\bigl(\unitball\bigr)} $$
and $0 \lt \rho \lt 1$, since the intersection is a proper subset of the unit ball. Note that $\rho$ depends only on $d$. In particular, $\chi(n)$ converges to zero as $n\to\infty$.
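
For concreteness, the ratio $\rho$ of the volume of the intersection $B_1(0) \cap B_1((1/2,0,\ldots,0))$ to that of the unit ball can be computed explicitly when $d = 2$, where the intersection of two unit disks with centres $1/2$ apart is a lens with a standard closed-form area. A quick numerical sketch (the variable names are mine):

```python
import math

# d = 2: area of the lens B_1(0) ∩ B_1((1/2, 0)), i.e. of two unit disks
# whose centres are t = 1/2 apart:  A(t) = 2*acos(t/2) - (t/2)*sqrt(4 - t^2)
t = 0.5
lens_area = 2 * math.acos(t / 2) - (t / 2) * math.sqrt(4 - t * t)
rho = lens_area / math.pi  # ratio to the area of the unit disk, ≈ 0.685

def upper_bound(n):
    # the bound (n + 1) * rho**(n - 1) derived above, for d = 2
    return (n + 1) * rho ** (n - 1)
```

So for $d = 2$ the argument gives $\chi(n) \leq (n+1)\,(0.686)^{n-1}$, which drops below $10^{-6}$ by $n = 50$.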

Lower bound for $\chi(n)$

It is very easy to give a crude lower bound for $\chi(n)$. Simply observe that $X_n \subset K_n$, and that $X_n = \ball{1/2}^{\times n}$ (here we do require the choice of $1/2$ in the definition of $X_n$). Therefore,
$$ \mu_n(K_n) \geq \mu_n(X_n) = \mu\bigl(\ball{1/2}\bigr)^n $$
and so
$$ \chi(n) \geq \frac{\mu\bigl(\ball{1/2}\bigr)^n}{\mu\bigl(\unitball\bigr)^n} = 2^{-dn} $$
since $\mu\bigl(\ball{1/2}\bigr) = 2^{-d}\mu\bigl(\unitball\bigr)$ by scaling.

Proof of the Lemma

We make use of the rotational symmetry of $Z_{n,1}$. Choose a rotation on $\RR^d$ which takes the point $x$ to the point $(\norm{x},0,\ldots,0)$ on the first axis. Applying that rotation componentwise gives a measure preserving bijection between $P_x$ and $P_{(\norm{x},0,\ldots,0)}$, and we see that
$$ \mu_{n-1}(P_x) = \mu_{n-1}\bigl( P_{(\norm{x},0,\ldots,0)} \bigr) \label{(1)} $$

Now write $r = \norm{x} \geq 1/2$. Every point $p_j$ of a tuple in $P_{(r,0,\ldots,0)}$ must lie in the unit ball and within distance $1$ of $(r,0,\ldots,0)$, so
$$ P_{(r,0,\ldots,0)} \subset \bigl( \unitball\cap\ball[r,0,\ldots,0]{1} \bigr)^{n-1} $$
Finally, the volume of $\unitball\cap\ball[r,0,\ldots,0]{1}$ is non-increasing in $r$, hence bounded above by its value at $r = 1/2$. Combining this with (1) proves the lemma. $\square$

I'm going to show that, for any $x>2^{-d}$, $\chi(n)$ is $O(x^{n})$. By Ricardo's lower bound, this is tight.

Given a set of $n$ points of diameter at most $1$, take all the points obtained by rounding each coordinate up or down to a multiple of $\epsilon$. This new set of points contains the old set in its convex hull, and since each new point is at distance no more than $\sqrt{d}\epsilon$ from an old point, the new set has diameter at most $1+2\sqrt{d}\epsilon$. Thus each configuration of points of diameter at most $1$ in the ball of radius $1$ is contained in the convex hull of some configuration of $\epsilon$-lattice points of diameter at most $1+2\sqrt{d}\epsilon$ in the ball of radius $1+\sqrt{d}\epsilon$. Let $N$ be the (finite) number of such configurations of lattice points. By the isodiametric inequality, the ball has the greatest volume of any body with a given diameter, so each convex hull takes up at most $(1+2\sqrt{d} \epsilon)^d/2^d$ of the volume of the unit ball, and the probability that all $n$ points land in some one hull is at most

$$ N \left(\frac{ 1 + 2 \sqrt{d} \epsilon}{2}\right)^{dn} $$

Since $N$ depends on $\epsilon$ but not on $n$, this shows $\chi(n) = O(x^n)$ for any $x > \bigl((1+2\sqrt{d}\epsilon)/2\bigr)^d$; letting $\epsilon$ go to $0$ gives the desired result for every $x > 2^{-d}$.
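
The key geometric step, that snapping each coordinate to a multiple of $\epsilon$ moves a point by at most $\sqrt{d}\,\epsilon$ and hence changes the diameter by at most $2\sqrt{d}\,\epsilon$, is easy to check numerically. A sketch with made-up parameters (the argument above uses all $2^d$ corners of each grid cell; any single corner already illustrates the distance bound):

```python
import math
import random

random.seed(1)
d, eps = 3, 0.05

# random points in the unit ball of R^d via rejection sampling
pts = []
while len(pts) < 20:
    p = [random.uniform(-1, 1) for _ in range(d)]
    if sum(c * c for c in p) <= 1:
        pts.append(p)

def snap(p, eps):
    # one corner of the eps-grid cell containing p; it lies within
    # sqrt(d)*eps of p since each coordinate moves by less than eps
    return [eps * math.floor(c / eps) for c in p]

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

snapped = [snap(p, eps) for p in pts]
for p, q in zip(pts, snapped):
    assert dist(p, q) <= math.sqrt(d) * eps + 1e-12

# the diameter changes by at most 2*sqrt(d)*eps
diam = max(dist(p, q) for p in pts for q in pts)
diam_snapped = max(dist(p, q) for p in snapped for q in snapped)
assert abs(diam_snapped - diam) <= 2 * math.sqrt(d) * eps + 1e-12
```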

To be more specific, we can set $\epsilon = n^{-1/(d+1)}$, and get the upper bound

A quick proof sketch that the ratio goes to $0$: let $a$ and $b$ be points in the unit ball at distance $2$ from each other. (The existence of such a pair does not hold in an arbitrary metric space!) As we add more and more points to our set in an i.i.d. manner, with probability $1$ we will eventually find points $x$ and $y$ with $d(x,a)< 1/2$ and $d(y,b)< 1/2$. Then $d(x,y)>1$, or else
$$
d(a,b)\le d(a,x)+d(x,y)+d(y,b) < 1/2 + 1 + 1/2 = 2
$$
which contradicts $d(a,b)=2$. Finally, just as the Strong Law of Large Numbers implies the Weak Law of Large Numbers, this almost-sure statement implies that $\chi(n)$, the probability that the first $n$ points have diameter at most $1$, tends to $0$.
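
This sketch can be checked by simulation in the plane: take $a$ and $b$ antipodal on the unit disk and watch how often a sample of $n$ uniform points hits both of the half-radius caps around them, which forces diameter greater than $1$ (the function names are mine):

```python
import numpy as np

rng = np.random.default_rng(42)
# antipodal points of the unit disk, so their distance is 2
a, b = np.array([1.0, 0.0]), np.array([-1.0, 0.0])

def sample_disk(n, rng):
    # uniform points in the unit disk
    r = np.sqrt(rng.random(n))
    th = 2 * np.pi * rng.random(n)
    return np.column_stack((r * np.cos(th), r * np.sin(th)))

def both_caps_hit(n, trials=2000, rng=rng):
    # fraction of trials where some point lies within 1/2 of a AND some
    # point lies within 1/2 of b; any such pair is at distance > 1
    hits = 0
    for _ in range(trials):
        pts = sample_disk(n, rng)
        near_a = np.linalg.norm(pts - a, axis=1) < 0.5
        near_b = np.linalg.norm(pts - b, axis=1) < 0.5
        if near_a.any() and near_b.any():
            hits += 1
    return hits / trials
```

Each cap covers a fixed positive fraction of the disk, so the hit probability climbs rapidly toward $1$ as $n$ grows, exactly as the argument predicts.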