Sounds nice: the corresponding diagonal quadratic forms are rationally equivalent. Reminds me a little of mathoverflow.net/questions/22399. Is there a link to the "high-powered machinery" proof?
– Wadim Zudilin, May 31 '10 at 0:53

Is anything known about the rational congruence of matrices? Clearly, for two diagonal matrices to be rationally congruent, their determinants must be equal up to multiplication by a rational square. Is it possible that this is sufficient?
– Peter Shor, May 31 '10 at 2:20


It's not sufficient: the question asks when two rational quadratic forms are isometric. The square class of the determinant is the discriminant of the quadratic form, but there are other invariants. Example: diag(3,3) is not isometric to diag(1,1). The main theorem on rational quadratic forms is Hasse-Minkowski (see Wikipedia).
– fherzig, May 31 '10 at 2:27


Indeed, it is necessary and sufficient that the discriminants are equal, the signatures are equal, and for all primes $p$ the Hasse-Witt invariants at $p$ are equal. For two particular quadratic forms $A$ and $B$, there are certainly algorithms to compute all these invariants. For an infinite family as in this case, more work and/or cleverness may be required.
– Pete L. Clark, May 31 '10 at 2:33
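These invariants can indeed be computed by machine for particular forms. Here is a short Python sketch (not from the thread; it implements the standard formulas for the Hilbert symbol from Serre's *A Course in Arithmetic*, Ch. III, and the function names are my own) that computes Hasse-Witt invariants of diagonal forms and distinguishes the example diag(3,3) vs diag(1,1) from the comments above:

```python
def legendre(a, p):
    """Legendre symbol (a|p) for an odd prime p and a not divisible by p."""
    r = pow(a % p, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def hilbert(a, b, p):
    """Hilbert symbol (a, b)_p for nonzero integers a, b at the prime p."""
    # strip the p-part: a = p^alpha * u, b = p^beta * v with u, v units
    alpha, u = 0, a
    while u % p == 0:
        alpha, u = alpha + 1, u // p
    beta, v = 0, b
    while v % p == 0:
        beta, v = beta + 1, v // p
    if p == 2:
        eps = lambda x: ((x - 1) // 2) % 2        # depends on x mod 4
        omega = lambda x: ((x * x - 1) // 8) % 2  # depends on x mod 8
        e = eps(u) * eps(v) + alpha * omega(v) + beta * omega(u)
        return -1 if e % 2 else 1
    s = legendre(u, p) ** beta * legendre(v, p) ** alpha
    if ((p - 1) // 2) % 2 == 1 and (alpha * beta) % 2 == 1:
        s = -s
    return s

def hasse_invariant(diag, p):
    """Hasse-Witt invariant at p of the form diag(a_1, ..., a_n)."""
    s = 1
    for i in range(len(diag)):
        for j in range(i + 1, len(diag)):
            s *= hilbert(diag[i], diag[j], p)
    return s

# diag(1,1) and diag(3,3) share discriminant class and signature,
# but their invariants at p = 3 differ, so they are not isometric over Q:
print(hasse_invariant([1, 1], 3), hasse_invariant([3, 3], 3))  # 1 -1
```

Both forms have discriminant 1 (up to squares) and signature (2,0), so the invariant at $p=3$ is exactly what separates them.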

4 Answers

Wadim, isn't that 95% of the proof? First let me correct your first displayed equation (thanks to fherzig for pointing this out): the quadratic version is not sufficient for the proof, but
$$
\sum_{i=0}^{n-1}\binom{4n}{2i}P_i(t)P_i(s)
=\sum_{i=0}^{n-1}\binom{4n}{2i+1}\hat P_i(t)\hat P_i(s)
$$
is, where $t$ and $s$ are two independent variables.

Let me rename your $P_i$ as $Q_{2i}$ and your $\hat{P_i}$ as $Q_{2i+1}$, so that your equation
$$
\sum_{i=0}^{n-1}\binom{4n}{2i}P_i(t)P_i(s)
=\sum_{i=0}^{n-1}\binom{4n}{2i+1}\hat P_i(t)\hat P_i(s)
$$
becomes
$$
\sum_{i=0}^{2n-1}\left(-1\right)^i\binom{4n}{i}Q_i(t)Q_i(s)=0.
$$
Now let $Q$ be the polynomial $Q\left(t\right)=t^{n-1}$ (I suspect the proof below works with any polynomial $Q$ of degree $n-1$ (not less!), but I'm not completely sure and don't have the time to check) and let $Q_i\left(t\right)=\left(2n-i\right)Q\left(t-\left(2n-i\right)^2\right)$ for every $i\in\mathbb Z$. Being a polynomial in $i$ of degree $2\left(2\left(n-1\right)+1\right)=4n-2<4n$ (for fixed $t$ and $s$), the term $Q_i\left(t\right)Q_i\left(s\right)$ satisfies
$$
\sum_{i=0}^{4n}\left(-1\right)^i\binom{4n}{i}Q_i(t)Q_i(s)=0,
$$
since the $4n$-th finite difference of a polynomial of degree $< 4n$ is zero. Due to the symmetry of the function $i\mapsto Q_i\left(t\right)Q_i\left(s\right)$ around $i=2n$ (indeed, $Q_{4n-i}=-Q_i$, so the products match in pairs), and due to $Q_{2n}\left(t\right)=0$, this becomes
$$
\sum_{i=0}^{2n-1}\left(-1\right)^i\binom{4n}{i}Q_i(t)Q_i(s)=0.
$$
Now it remains to prove that each of the families $\left(Q_1,Q_3,...,Q_{2n-1}\right)$ and $\left(Q_0,Q_2,...,Q_{2n-2}\right)$ spans the space of all polynomials in $t$ of degree $< n$. This is a particular case of a more general fact: If $x_1$, $x_2$, ..., $x_n$ are $n$ pairwise distinct reals, then the polynomials $\left(t-x_1\right)^{n-1}$, $\left(t-x_2\right)^{n-1}$, ..., $\left(t-x_n\right)^{n-1}$ are linearly independent. In order to prove this, assume that they are linearly dependent, take their derivatives of all possible orders, evaluate at $t=0$, and get a contradiction because Vandermonde's determinant is nonzero.
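The key alternating-sum identity is easy to sanity-check numerically. A small Python sketch (exact integer arithmetic; the function name is mine) verifying $\sum_{i=0}^{2n-1}\left(-1\right)^i\binom{4n}{i}Q_i(t)Q_i(s)=0$ for $Q(t)=t^{n-1}$:

```python
from math import comb

def Q_poly(i, t, n):
    """Q_i(t) = (2n - i) * Q(t - (2n - i)^2) with Q(x) = x^(n-1)."""
    m = 2 * n - i
    return m * (t - m * m) ** (n - 1)

# The halved alternating sum vanishes, i.e. the even-indexed terms
# (with binomials C(4n, 2i)) balance the odd-indexed ones exactly.
for n in range(1, 6):
    for t, s in [(0, 0), (5, -7), (12, 3)]:
        total = sum((-1) ** i * comb(4 * n, i)
                    * Q_poly(i, t, n) * Q_poly(i, s, n)
                    for i in range(2 * n))
        assert total == 0, (n, t, s)
print("identity holds for n = 1..5")
```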

Darij, this is indeed a wonderful proof, and much more than 50% (my post was just a reformulation of the original problem). It should also work for the second case. Thanks a lot for giving me the rest of the problem! :)
– Wadim Zudilin, Jun 1 '10 at 11:51

That's a nice solution. So one can describe an explicit change of basis between the two quadratic forms. A couple of typos: it should say $Q_{2i+1}$ when you rename near the top. Also $2(2(n-1)+1) = 4n-2$ (!) and this is less than $4n$. You really need $< 4n$, since otherwise the alternating sum you wrote wouldn't vanish.
– fherzig, Jun 1 '10 at 20:18

That's a great argument, Darij. One additional comment... Tim posed the problem over the rational numbers, but the theorem works over every field of characteristic $\ne 2$. (If char $= 2$, then you see immediately that it is wrong.) I guess your proof requires char $=0$ or $>n$.
– Skip, Jun 2 '10 at 15:17

Expecting to be criticised, I nevertheless try to explain an approach to the problem
(for the first case). Assume that there exist two families of polynomials
$P_i(t)$ and $\hat P_i(t)$, $i=0,1,\dots,n-1$, each spanning the space of
polynomials of degree less than $n$ such that
$$
\sum_{i=0}^{n-1}\binom{4n}{2i}P_i(t)^2
=\sum_{i=0}^{n-1}\binom{4n}{2i+1}\hat P_i(t)^2
$$
identically in $t$; this happens, for example, if both sides agree at $2n-1$ distinct points (each side is a polynomial of degree at most $2n-2$). Then replacing the monomials $t^0,t^1,\dots,t^{n-1}$
by variables $t_0,t_1,\dots,t_{n-1}$ we obtain the linear forms
$L_i(t_0,t_1,\dots,t_{n-1})$ and $\hat L_i(t_0,t_1,\dots,t_{n-1})$
such that $L_i(1,t,\dots,t^{n-1})=P_i(t)$ and $\hat L_i(1,t,\dots,t^{n-1})=\hat P_i(t)$,
$i=0,1,\dots,n-1$, and
$$
\sum_{i=0}^{n-1}\binom{4n}{2i}L_i(t_0,t_1,\dots,t_{n-1})^2
=\sum_{i=0}^{n-1}\binom{4n}{2i+1}\hat L_i(t_0,t_1,\dots,t_{n-1})^2,
$$
the desired equivalence. So far, I could only manage some examples
of the polynomial expansions, like
$$
\frac{(1+t)^{4n}+(1-t)^{4n}}{2t^{2n}}
=\sum_{i=0}^{2n}\binom{4n}{2i}t^{2i-2n}
=\sum_{i=0}^{n-1}\binom{4n}{2i}(t^{-2(n-i)}+t^{2(n-i)})+\binom{4n}{2n}
$$
$$
=\sum_{i=0}^{n-1}\binom{4n}{2i}(t^{n-i}-t^{-(n-i)})^2+\sum_{i=0}^{2n}\binom{4n}{2i}
$$
$$
=\biggl(t-\frac1t\biggr)^2\sum_{i=0}^{n-1}\binom{4n}{2i}P_i\biggl(t+\frac1t\biggr)^2+2^{4n-1},
$$
in other words,
$$
\sum_{i=0}^{n-1}\binom{4n}{2i}P_i\biggl(t+\frac1t\biggr)^2
=\frac{(1+t)^{4n}+(1-t)^{4n}-2^{4n}t^{2n}}{2t^{2n}(t-1/t)^2}.
$$
So, the question is whether we can write the rational function on the right-hand side as
$$
\sum_{i=0}^{n-1}\binom{4n}{2i+1}\hat P_i\biggl(t+\frac1t\biggr)^2.
$$
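For what it's worth, the displayed identity can be checked exactly in Python. Note that each square $(t^{n-i}-t^{-(n-i)})^2$ contributes a factor $(t-1/t)^2$, so that factor enters the denominator squared; below, $P_i(t+1/t)$ is computed directly as the quotient $(t^{n-i}-t^{-(n-i)})/(t-1/t)$ rather than as an explicit polynomial (the function name is my own):

```python
from fractions import Fraction
from math import comb

def P_sq_sum(n, t):
    """LHS of the identity: sum_i C(4n, 2i) * P_i(t + 1/t)^2, where
    P_i(t + 1/t) = (t^(n-i) - t^(-(n-i))) / (t - 1/t)."""
    w = t - 1 / t
    return sum(comb(4 * n, 2 * i) * ((t ** (n - i) - t ** (i - n)) / w) ** 2
               for i in range(n))

for n in range(1, 5):
    t = Fraction(3, 2)  # any rational t other than 0, 1, -1 works
    rhs = ((1 + t) ** (4 * n) + (1 - t) ** (4 * n)
           - 2 ** (4 * n) * t ** (2 * n)) \
          / (2 * t ** (2 * n) * (t - 1 / t) ** 2)
    assert P_sq_sum(n, t) == rhs
print("identity verified for n = 1..4")
```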

There is a problem with this approach (I refer to Wadim Zudilin's answer). At least I don't see how to get from the first displayed equation (involving the $P_i$) to the second (involving the $L_i$) in his post.

For instance, the forms $t_0t_2$ and $t_1^2$ agree after the substitution $t_i=t^i$ (both become $t^2$), yet they are distinct quadratic forms, as one sees by considering the coefficient of $t_0t_2$ or of $t_1^2$. Clearly, if the second equation were true it would imply the first. But the problem with the converse is that the first involves 5 degrees of freedom (the coefficients of $t^i$ with $0 \le i \le 4$), whereas the second has 6 (the coefficients of the $t_it_j$ with $i \le j$).

Edit: Here is a better example:

$(t^2+1)^2 - 2t^2 + 1^2 = (t^2-1)^2 + 2t^2 + 1^2$,

but the two forms $diag(1,-2,1)$ and $diag(1,2,1)$ don't even have the same signature.
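A two-line Python check of this example: the polynomial identity holds, yet the two diagonal forms have different signatures.

```python
# The polynomial identity: both sides expand to t^4 + 2.
lhs = lambda t: (t * t + 1) ** 2 - 2 * t * t + 1
rhs = lambda t: (t * t - 1) ** 2 + 2 * t * t + 1
assert all(lhs(t) == rhs(t) for t in range(-10, 11))

# But diag(1,-2,1) has signature (2,1) while diag(1,2,1) has (3,0),
# so the two forms cannot be congruent over the rationals.
signature = lambda d: (sum(x > 0 for x in d), sum(x < 0 for x in d))
assert signature([1, -2, 1]) == (2, 1)
assert signature([1, 2, 1]) == (3, 0)
```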

You are right. We must go for bilinear rather than quadratic forms. I'll fix my post accordingly.
– darij grinberg, Jun 1 '10 at 17:23

@fherzig: Thanks for pointing out this mistake. +1. I was trying to find this "polynomial-quadratic form" trick somewhere but couldn't. Passing to bilinear forms is much more than a fix! It's now 100% Darij's proof.
– Wadim Zudilin, Jun 2 '10 at 0:29

This is far too long for a comment.
From a pantload of experience finding explicit rational congruences, I can suggest that this problem could be decided by actual formulas. In any case, finding some specific matrices ought to be instructive. My difficulty is that the numbers have passed what I can manage with my C++ programs (I do not have $n=12$ explicit), plus I never wrote anything for 4 by 4 or larger.

Well, if we multiply both sides of a rational congruence by the square of the LCM $k$ of the "denominators", we get an integral "congruence" $$P^t A P = k^2 B,$$ where $P$ is integral. For $n=4,$ where $A$ is diag(1) and $B$ is diag(4), we just get $k=2$ and $P$ is diag(2).

The bad news for me is that $n=12$ began to run into bounds on my C++ program. So I wanted to show others how to find these matrices in computer languages with unbounded integers. The main things are that you should assume that, while there will be infinitely many possible "denominators" $k$ that work, both for computational and perceptual purposes it is worth investigating the smallest values $k= 1,2,3,\ldots$ first. Next, given a value $k,$ we do not attempt to vary all $n^2$ elements in the matrix $P.$ This is computationally infeasible. Instead, make a list of possible column 1's, then a list of possible column 2's, and so on. Once the $n$ lists of columns are complete, do what probably amounts to what they call "backtracking," meaning that you pick a column 1 ( meaning $c_1^t \; A \; c_1 = k^2 B_{11}$), then a column 2, if those are compatible so far ($ c_2^t \; A \; c_1 = 0 $) then pick a column 3, and so on.

As $A$ and $B$ are diagonal we automatically get multiple copies of essentially the same matrices with the only change being $\pm$ signs. But I am really impressed that there has been essentially one matrix $P$ for each $k.$ This tells me that there may be predictable patterns in the triple $(n,k,P),$ different for $n \equiv 0,2 \pmod 4$ but perhaps related in some way all the same.