4 Answers

Interlacing often comes from a monotonicity formula: if $\arg(f(x)/g(x))$ is monotone in $x$ (assuming that $f$ and $g$ do not simultaneously vanish), then the zeroes of $f$ and $g$ will interlace, as can be seen geometrically by observing how the curve $x \mapsto (f(x),g(x))$ winds around the origin.

The monotonicity of the phase is equivalent to the Wronskian $fg' - f'g$ of the two functions $f, g$ having a definite sign. I think one can prove the two classical interlacing properties you mentioned (as well as the orthogonal polynomial one Noam mentioned) by verifying this definiteness property (for the functions $f$ and $f'$ in the first case, and for the two characteristic functions in the second case).
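As a quick numerical sanity check of this definiteness (a sketch in numpy, added for illustration; the real-rooted polynomial below is an arbitrary example, not anything from the answer):

```python
# Sanity check: for real-rooted f and g = f', the Wronskian
# f*g' - f'*g = f*f'' - (f')^2 has a definite sign (here <= 0, by
# Laguerre's inequality), and the roots of f' interlace those of f.
import numpy as np

roots = np.array([-2.0, -0.5, 1.0, 3.0])      # arbitrary real, distinct roots
f = np.poly1d(np.poly(roots))                 # monic polynomial with these roots
fp, fpp = f.deriv(), f.deriv(2)

xs = np.linspace(-4.0, 5.0, 1000)
wronskian = f(xs) * fpp(xs) - fp(xs) ** 2     # W(f, f') sampled on the line
print(np.all(wronskian <= 1e-9))              # True: definite sign

print(np.sort(roots))                         # [-2.  -0.5  1.   3. ]
print(np.sort(fp.r))                          # one root of f' in each gap
```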

But monotonicity formulae are ubiquitous throughout mathematics, and any two such formulae selected at random are unlikely to be related. So my feeling is that most pairs of interlacing formulae similarly have no strong connection to each other.

This is a common-cause explanation -- akin to the correlation between people eating ice cream at the beach and people drowning at the beach on a given day (both caused by there being more or fewer people at the beach that day).
– john mangual Nov 20 '11 at 11:53

Anyway, it sounds like "$\partial$(interlacing) = monotonicity" and maybe even "$\partial$(monotonicity) = positivity". I should try these exercises with the topological strategy you have outlined.
– john mangual Nov 20 '11 at 12:31

Hmmm... if $f(x) = x^m$ then $(f')^2 - f f'' = m x^{2m-2}$, which is nonnegative. But if $f(x) = x^2+1$, then $(f')^2 - f f'' = 4x^2 - 2(x^2+1) = 2x^2 - 2$, which switches sign. This must be a result for polynomials, with real coefficients, all of whose roots are real :-/
– john mangual Nov 20 '11 at 15:49
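A symbolic check of these two computations (a sketch using sympy, added for illustration):

```python
# Symbolic check of the two examples in the comment above.
import sympy as sp

x = sp.symbols('x')
m = sp.symbols('m', positive=True, integer=True)

f = x**m
print(sp.simplify(sp.diff(f, x)**2 - f * sp.diff(f, x, 2)))  # m*x**(2*m - 2)

g = x**2 + 1
print(sp.expand(sp.diff(g, x)**2 - g * sp.diff(g, x, 2)))    # 2*x**2 - 2
```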

My second comment was a bunch of speculation. @Terry Tao: basically, you can get interlacing results topologically by looking at the winding number of $h(x) = f(x) + ig(x)$ as $x$ runs through the reals, provided you can prove that $h(x)$ always moves (counter)clockwise. Honestly, I'm having a hard time proving these results without resorting to the mean value theorem. Somehow you need to use the fact that all the roots are real in both cases.
– john mangual Nov 21 '11 at 3:36
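A numerical illustration of that winding picture (a sketch in numpy, added for illustration; it tracks a continuous branch of $\arg h(x)$ for $g = f'$ and checks that it is monotone):

```python
# For real-rooted f and h(x) = f(x) + i*f'(x), a continuous branch of
# arg(h(x)) is monotone (here decreasing), so the curve winds clockwise
# around the origin and never backtracks.
import numpy as np

roots = np.array([-2.0, -0.5, 1.0, 3.0])      # arbitrary real, distinct roots
f = np.poly1d(np.poly(roots))
fp = f.deriv()

xs = np.linspace(-4.0, 5.0, 5000)
h = f(xs) + 1j * fp(xs)
phase = np.unwrap(np.angle(h))                # continuous branch of arg(h)
print(np.all(np.diff(phase) <= 1e-9))         # True: monotone phase
```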

This is fuzzy (and related to Brendan's fuzzy answer) but maybe someone will tease out more details or give a better explanation:

Let $f(x)$ have degree $n$ and all roots real. I will assume that $f$ is monic and that the roots are distinct. $$f=\prod_1^n(x-\alpha_i) \text{ with } \alpha_1 \lt \cdots \lt \alpha_n$$ (the case of repeated roots seems no harder, but I don't want to make a careless misstatement.) Then the scaled derivative $f'(x)/n$ is one monic polynomial with roots interlacing those of $f$.

What is a nice description of the set of polynomials with this property?

We could say that it is $$W=\lbrace\prod_1^{n-1}(x-\beta_i) \text{ with } \alpha_j \le \beta_j \le \alpha_{j+1}\rbrace$$ and let $S$ be the corresponding set with the requirement that $\alpha_j \lt \beta_j \lt \alpha_{j+1}$ (the sets of weakly and strongly interlacing polynomials). However, that seems unsatisfying.

Consider the $n$ polynomials $$f_j=\frac{f(x)}{x-\alpha_j}$$ obtained by dropping one root. They are in $W$, and it seems intuitive that the set of convex combinations of these polynomials $$\sum c_j f_j(x) \text{ with } \sum c_j=1 \text{ and } c_j \ge 0$$ is $W$, that each polynomial in $W$ is a unique combination, and that $S$ arises from requiring all the $c_j$ to be positive. Of course $f'/n$ is the case in which all $c_j=\frac1n$: the coefficients range over a simplex, and the scaled derivative comes from the centroid. I will give an interesting justification below, although it may not be the "right" one.
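Indeed, the product rule gives $f'(x) = \sum_j f_j(x)$ exactly, so $f'/n$ is precisely the centroid combination. A quick numerical check (a sketch in numpy, added for illustration):

```python
# Check that f'(x) = sum_j f_j(x), so f'/n is the centroid c_j = 1/n.
import numpy as np

alphas = np.array([-2.0, -0.5, 1.0, 3.0])     # arbitrary distinct real roots
n = len(alphas)
f = np.poly1d(np.poly(alphas))

# f_j = f(x)/(x - alpha_j): drop the j-th root
f_js = [np.poly1d(np.poly(np.delete(alphas, j))) for j in range(n)]

print(np.allclose(f.deriv().coeffs, sum(f_js).coeffs))   # True
```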

A connection is that the $n \times n$ diagonal matrix $D$ with diagonal entries $\alpha_i$ has characteristic polynomial $f(x)$, and the principal submatrix obtained by deleting the $j$th row and column has characteristic polynomial $f_j(x)$. Furthermore, any real symmetric matrix with these eigenvalues is $A=QDQ^T$ for an orthogonal matrix $Q$ (whose columns are eigenvectors of $A$). There must be more to say about that, but I won't.
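For instance, Cauchy's interlacing theorem says the eigenvalues of a principal $(n-1)\times(n-1)$ submatrix of any such $A$ interlace the $\alpha_i$. A quick numerical check (a sketch in numpy, added for illustration; the random orthogonal $Q$ is just an arbitrary example):

```python
# Cauchy interlacing check: A = Q D Q^T with random orthogonal Q; the
# eigenvalues beta_i of the leading (n-1)x(n-1) principal submatrix
# satisfy alpha_1 <= beta_1 <= alpha_2 <= ... <= alpha_n.
import numpy as np

rng = np.random.default_rng(0)
alphas = np.array([-2.0, -0.5, 1.0, 3.0])
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # random orthogonal Q
A = Q @ np.diag(alphas) @ Q.T

betas = np.linalg.eigvalsh(A[:-1, :-1])            # ascending eigenvalues
print(np.all(alphas[:-1] <= betas) and np.all(betas <= alphas[1:]))  # True
```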

Here is my justification of the statement about convex combinations. I include it because I am not sure it satisfies me (so maybe someone else will tell me what I should have seen), and because it seems to enhance the connection. It would be nice to have a direct recipe for the combination giving a desired polynomial $g$ with chosen roots. One case with something like that is the Lagrange interpolation formula, although it gives arbitrarily chosen values at fixed points rather than the fixed value $0$ at arbitrarily chosen points. However, the polynomials $f_j$ are exactly the ingredients used by Lagrange interpolation to specify values of a polynomial at the $n$ places $x=\alpha_j$. A polynomial $g$ with the interlacing properties we desire is uniquely determined by the values it takes at those $n$ points (which should alternate in sign), so we should take something like $$c_j=\frac{\prod_i(\alpha_j-\beta_i)}{\prod_{k \ne j}(\alpha_j-\alpha_k)}$$ where $1 \le i \le n-1$ and $1 \le k \le n$.
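A numerical check of this recipe (a sketch in numpy, added for illustration; the $\alpha_j$ and $\beta_i$ below are arbitrary choices):

```python
# Choose betas strictly interlacing the alphas, set g = prod_i (x - beta_i),
# and recover g as the convex combination sum_j c_j f_j with
# c_j = prod_i (alpha_j - beta_i) / prod_{k != j} (alpha_j - alpha_k).
import numpy as np

alphas = np.array([-2.0, -0.5, 1.0, 3.0])
betas = np.array([-1.0, 0.0, 2.0])            # one beta strictly in each gap
n = len(alphas)

f_js = [np.poly1d(np.poly(np.delete(alphas, j))) for j in range(n)]
g = np.poly1d(np.poly(betas))

c = np.array([np.prod(alphas[j] - betas)
              / np.prod(np.delete(alphas[j] - alphas, j))
              for j in range(n)])
print(np.isclose(c.sum(), 1.0), np.all(c > 0))          # a convex combination
combo = sum(fj * cj for fj, cj in zip(f_js, c))
print(np.allclose(g.coeffs, combo.coeffs))              # True: recovers g
```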

Yeah, so maybe the link is stronger than Terry suggests. This would be the same as slicing the matrix $A$ with eigenvalues $\alpha_1, \dots, \alpha_n$ along a hyperplane. Those are the characteristic polynomials you get.
– john mangual Nov 23 '11 at 16:31

A rather tenuous connection, but I'll throw it out and hope somebody can build a better answer from it (in short, this is too long for a comment):

It is well known (or it should be!) that one can use the Sturm sequence construction for a polynomial $p(x)$ with all roots real, together with its derivative, to construct a symmetric tridiagonal companion matrix (that is, a symmetric tridiagonal matrix whose characteristic polynomial is $p(x)$). See Fiedler's paper for details.

More precisely, if $p_1(x), p_2(x), \dots, p_n(x) = p(x)$ is the Sturm sequence, the construction produces a symmetric tridiagonal matrix whose characteristic polynomial is $p_n(x)$, and the characteristic polynomials of its leading $1\times1, 2\times 2, \dots$ principal submatrices are $p_1(x), p_2(x), \dots$ We see here the correspondence between Sturm sequences for orthogonal polynomials and tridiagonal matrices.
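To illustrate the tridiagonal side of this correspondence (a sketch in numpy, added for illustration; the entries $a_k, b_k$ are arbitrary example values, not taken from Fiedler's paper):

```python
# The k-th leading principal submatrix of the Jacobi matrix J below has
# characteristic polynomial p_k, where the p_k satisfy the three-term
# recurrence p_k = (x - a_k) p_{k-1} - b_{k-1}^2 p_{k-2}.
import numpy as np

a = np.array([0.5, -1.0, 2.0, 0.0])            # diagonal entries
b = np.array([1.0, 0.7, 1.3])                  # off-diagonal entries
J = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)

x = np.poly1d([1.0, 0.0])
p = [np.poly1d([1.0]), x - a[0]]               # p_0 = 1, p_1 = x - a_1
for k in range(2, len(a) + 1):
    p.append((x - a[k - 1]) * p[k - 1] - p[k - 2] * (b[k - 2] ** 2))

for k in range(1, len(a) + 1):
    char_k = np.poly1d(np.poly(J[:k, :k]))     # det(x I - J_k)
    print(k, np.allclose(char_k.coeffs, p[k].coeffs))   # True for every k
```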