which comes from the adjacency matrix of a graph corresponding to a one-dimensional chain of $N$ nodes with dangling ends. A cartoon of this graph is $$\circ -\circ -\circ -\circ -\cdots-\circ -\circ$$

It turns out that if you plot a histogram of its eigenvalues, it appears to match the arcsine distribution $$f(x) = \frac{1} {\pi \sqrt{4-x^2}}, \qquad \vert x \vert < 2, $$ which is exactly what one would expect from the free convolution of the symmetric Bernoulli distribution $$ p(x) = \frac 1 2 \left( \delta\left(x-1\right) + \delta \left(x+1\right)\right)$$ with itself.
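(A quick numerical sanity check of this claim, sketched in Python; the helper `path_adjacency` and the parameter names are my own, not from the question. It compares the empirical CDF of the chain's eigenvalues with the arcsine CDF $F(x)=\tfrac12+\arcsin(x/2)/\pi$.)

```python
import numpy as np

def path_adjacency(n):
    """Adjacency matrix of the chain (path graph) on n nodes."""
    a = np.zeros((n, n))
    i = np.arange(n - 1)
    a[i, i + 1] = a[i + 1, i] = 1.0
    return a

n = 2000
eigs = np.sort(np.linalg.eigvalsh(path_adjacency(n)))

# Arcsine CDF F(x) = 1/2 + arcsin(x/2)/pi at the sorted eigenvalues,
# against the empirical CDF; the two agree to O(1/n).
arcsine_cdf = 0.5 + np.arcsin(eigs / 2.0) / np.pi
empirical_cdf = (np.arange(1, n + 1) - 0.5) / n
print(np.max(np.abs(empirical_cdf - arcsine_cdf)))
```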

Is this mere coincidence, or evidence of something deeper? I feel like this must be some example of a known result out there.

where $\sigma_x$ is the Pauli matrix, which of course has eigenvalues $\pm 1$. It must be that these two matrices are freely independent in the $N\rightarrow \infty$ limit, and possibly even for finite $N$, so that this reduces to the free convolution described above.

I may be reading too much into this, but it's interesting to me that this is a completely deterministic matrix problem with free probabilistic characteristics. I'm not at all familiar with the algebraic aspects of free probability theory, let alone what the graph theoretic relationships would be.

The discrete Laplacian on a chain approximates the Laplacian on $[0, 1]$, so its eigenvalues approximate frequencies of standing waves with vanishing (?) boundary conditions. In fact using this idea one can explicitly write down eigenfunctions and eigenvalues.
– Qiaochu Yuan, Sep 25 '11 at 18:21
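(To make the comment above concrete, here is a small check, my own sketch rather than part of the thread, that the standing-wave ansatz $v_k(j)=\sin(jk\pi/(n+1))$ gives the exact eigenpairs of the chain's adjacency matrix, with eigenvalues $2\cos(k\pi/(n+1))$.)

```python
import numpy as np

n = 10
a = np.zeros((n, n))
i = np.arange(n - 1)
a[i, i + 1] = a[i + 1, i] = 1.0

j = np.arange(1, n + 1)
for k in range(1, n + 1):
    v = np.sin(j * k * np.pi / (n + 1))      # discrete standing wave
    lam = 2.0 * np.cos(k * np.pi / (n + 1))  # its eigenvalue
    assert np.allclose(a @ v, lam * v)
print("all", n, "eigenpairs verified")
```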


Why do you call o-o-...-o a cartoon of a graph? Isn't it a legitimate presentation of a graph?
– Hans Stricker, Sep 25 '11 at 22:46

3 Answers

I believe the relation between deterministic graphs and free probability you mention is not something generic. In fact, the key property of your matrix $M$ that makes the connection with free probability (to the best of my knowledge) is not that it is the adjacency matrix of some graph, but that it is a Jacobi matrix related to some orthogonal polynomials, which themselves come from random matrix models.

Let me try to elaborate: we first need some computations.

We have to assume $N$ even for things to work out properly. The Chebyshev polynomials $(T_k)_{k\geq0}$ of the first kind, normalized here to be monic, satisfy $T_k(x)=x^k+\ldots$ and are orthogonal for the weight
$$
w(x)=\frac{1}{\sqrt{1-x^2}},
$$
defined on $[-1,1]$; namely, for any $k\neq l$,
$$
\int_{-1}^1T_k(x)T_{l}(x)w(x)\,dx=0.
$$
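(As a sanity check, my addition, this orthogonality relation can be verified with Gauss–Chebyshev quadrature, which integrates against exactly this weight; I use NumPy's standard, non-monic Chebyshev basis, which only changes each polynomial by a scalar factor and so does not affect orthogonality.)

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

# Gauss-Chebyshev quadrature: the nodes cos((2i-1)pi/(2m)) with equal
# weights pi/m integrate f(x)/sqrt(1-x^2) over [-1,1] exactly for deg f <= 2m-1.
m = 20
nodes = np.cos((2 * np.arange(1, m + 1) - 1) * np.pi / (2 * m))
weight = np.pi / m

for k in range(5):
    for l in range(5):
        # <T_k, T_l>_w via quadrature (exact here since k + l < 2m - 1)
        ip = weight * np.sum(Chebyshev.basis(k)(nodes) * Chebyshev.basis(l)(nodes))
        if k != l:
            assert abs(ip) < 1e-10
print("orthogonality verified for k, l < 5")
```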
Their Jacobi matrix (the tridiagonal matrix built from the recurrence coefficients) is essentially $M/2$. For our purpose it is enough to know that the zeros of $T_N$ are asymptotically distributed like the eigenvalues of $M/2$.
Moreover, a formula due to Heine yields, up to a normalization constant,
$$
T_N(x)=\int_{-1}^1\ldots\int_{-1}^1\prod_{i=1}^N(x-x_i)\prod_{1\leq i < j \leq N}|x_i-x_j|^2\prod_{i=1}^Nw(x_i)dx_i.
$$
The change of variables $x_i=\cos\theta_i$ gives
$$
T_N(x)=\int_{0}^{2\pi}\ldots \int_0^{2\pi}\prod_{i=1}^N(x-\cos\theta_i)\prod_{1\leq i < j \leq N}|\cos\theta_i - \cos\theta_j|^2\prod_{i=1}^Nd\theta_i
$$
and by the Weyl integration formula
$$
T_N(x)=\int_{\mathcal{U}_N}\det(xI_N-\frac{U+U^*}{2}) dU = \mathbb{E}_{Haar}\Big(\det(xI_N-\frac{U+U^*}{2})\Big)
$$
where $dU$ stands for the Haar measure of the unitary group $\mathcal{U}_N$.

Conclusion: the random matrix $U+U^*$, with $U$ distributed according to Haar measure, has as mean eigenvalues the zeros of $T_N(x/2)$, which are asymptotically distributed like the eigenvalues of $M$. Thus they should have the same limiting distribution as $N\rightarrow\infty$, as soon as the limiting distribution of $U+U^*$ is deterministic.

On the one hand, the limiting distribution of $M$ is indeed known to be the arcsine distribution (note it is also the limiting distribution of the zeros of $T_N$ as $N\rightarrow\infty$, which is known to minimize the logarithmic energy
$$
\iint \log\frac{1}{|x-y|}d\mu(x)d\mu(y)
$$
over all probability measures $\mu$ on $[-1,1]$, a classical statement in potential theory).
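(Numerically, my addition: sorted, the zeros of $T_N$ and the eigenvalues of $M/2$ differ only by $O(1/N)$, so they indeed share the same arcsine limit.)

```python
import numpy as np

N = 500
# Zeros of the degree-N Chebyshev polynomial of the first kind.
cheb_zeros = np.sort(np.cos((2 * np.arange(1, N + 1) - 1) * np.pi / (2 * N)))

# Eigenvalues of M/2, where M is the adjacency matrix of the N-chain.
m = np.zeros((N, N))
i = np.arange(N - 1)
m[i, i + 1] = m[i + 1, i] = 1.0
half_m_eigs = np.sort(np.linalg.eigvalsh(m / 2.0))

gap = np.max(np.abs(cheb_zeros - half_m_eigs))
print(gap)  # O(1/N)
```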

On the other hand, by the invariance property of the Haar measure, the limiting spectral distribution of $U+U^*$ is the same as that of $A+VAV^*$, with $V$ also distributed according to Haar measure, which is known by Voiculescu's theorem to converge to $\mu_A\boxplus\mu_A$, where $\mu_A$ is the limiting distribution of your matrix $A$, namely $\mu_A=\frac{1}{2}(\delta_1+\delta_{-1})$.
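(A simulation of this last statement, my addition; the helper `haar_unitary` is my own naming. With $A = \mathrm{diag}(1,\ldots,1,-1,\ldots,-1)$ and $V$ Haar-distributed, the spectrum of $A+VAV^*$ hugs the arcsine law on $[-2,2]$.)

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(n, rng):
    """Haar-distributed unitary via QR of a complex Ginibre matrix."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))  # fix column phases to get exact Haar measure

N = 1000  # even, so A has eigenvalues +1 and -1 with weight 1/2 each
a = np.diag(np.concatenate([np.ones(N // 2), -np.ones(N // 2)]))
v = haar_unitary(N, rng)
eigs = np.sort(np.linalg.eigvalsh(a + v @ a @ v.conj().T))

# Compare the empirical CDF with the arcsine CDF F(x) = 1/2 + arcsin(x/2)/pi.
arcsine_cdf = 0.5 + np.arcsin(np.clip(eigs, -2.0, 2.0) / 2.0) / np.pi
empirical_cdf = (np.arange(1, N + 1) - 0.5) / N
print(np.max(np.abs(empirical_cdf - arcsine_cdf)))  # small for large N
```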

I would agree that there is something special about the matrix structure that allows this correspondence. I was not aware of Heine's formula, thanks. This deserves to be accepted as an answer, although I wonder if there is something more interesting that points toward possible generalizations.
– Jiahao Chen, May 11 '12 at 18:01

If $\phi_n$ denotes the characteristic polynomial of the path on $n$ vertices
then
$$
\phi_{n+1}(t) = t\phi_n(t) - \phi_{n-1}(t),
$$
from which you can show that
$$
\phi_n(2\cos(\zeta)) = \frac{\sin((n+1)\zeta)}{\sin(\zeta)}.
$$
So your observation is not a surprise from a graph theoretical viewpoint. I have nothing useful to say about free probability.
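(This identity is easy to confirm numerically, my check rather than part of the answer, by iterating the recurrence above.)

```python
import numpy as np

z = np.linspace(0.1, 3.0, 7)  # stay away from zeros of sin
t = 2.0 * np.cos(z)

phi_prev, phi = np.ones_like(t), t.copy()  # phi_0 = 1, phi_1 = t
for _ in range(19):
    phi_prev, phi = phi, t * phi - phi_prev  # phi_{n+1} = t*phi_n - phi_{n-1}

# phi now holds phi_20 evaluated at t = 2 cos(z)
assert np.allclose(phi, np.sin(21 * z) / np.sin(z))
print("phi_20(2 cos z) = sin(21 z)/sin(z) verified")
```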