There are different ways of showing that a given sequence $a_0,a_1,a_2,\dots$
of integers, say, is nonnegative. For example, one can show that $a_n$ counts
something, or express $a_n$ as a (multiple) sum of obviously positive numbers.
Another recipe is to manipulate the corresponding generating series
$A(x)=\sum_{n=0}^\infty a_nx^n$ and showing that $A(x)\ge0$ (this is the
notation for a series which has all coefficients nonnegative, and this extends
to formal power series in as many variables as needed).

An example of a criterion in this direction is
$$
(*) \qquad
A(xy)\ge0 \iff \frac1{(1-x)(1-y)}A\biggl(\frac{xy}{(1-x)(1-y)}\biggr)\ge0
$$
(The factor $1/\bigl((1-x)(1-y)\bigr)$ is introduced for cosmetic purposes only; of course, both $A(x)\ge0$ and $A(xy)\ge0$ are by definition equivalent to the nonnegativity of the sequence $a_n$.)
The equivalence can be verified by comparing the corresponding coefficients
in the power series expansion
$$
\frac1{(1-x)(1-y)}A\biggl(\frac{xy}{(1-x)(1-y)}\biggr)
=\sum_{n=0}^\infty a_n\sum_{k,m=0}^\infty\binom{n+k}n\binom{n+m}nx^{n+k}y^{n+m}.
$$
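This coefficient comparison can be sanity-checked numerically. The sketch below (pure Python, with an arbitrary sample sequence $a_n$ of my own choosing) expands the left-hand side by truncated bivariate series arithmetic and checks that the coefficient of $x^Ny^M$ equals $\sum_n a_n\binom{N}{n}\binom{M}{n}$, which is what the double sum gives after substituting $k=N-n$, $m=M-n$.

```python
from math import comb

T = 8  # truncation: keep monomials x^i y^j with i, j <= T

def mul(f, g):
    """Multiply two truncated bivariate series stored as (T+1)x(T+1) grids."""
    h = [[0] * (T + 1) for _ in range(T + 1)]
    for i in range(T + 1):
        for j in range(T + 1):
            if f[i][j]:
                for k in range(T + 1 - i):
                    for l in range(T + 1 - j):
                        h[i + k][j + l] += f[i][j] * g[k][l]
    return h

# 1/((1-x)(1-y)) = sum_{i,j>=0} x^i y^j
pref = [[1] * (T + 1) for _ in range(T + 1)]
# u = xy/((1-x)(1-y))
xy = [[0] * (T + 1) for _ in range(T + 1)]
xy[1][1] = 1
u = mul(xy, pref)

a = [2, -1, 3, 0, 5]  # arbitrary sample coefficients a_0..a_4 of A

# A(u) = sum_n a_n u^n, then multiply by the prefactor 1/((1-x)(1-y))
Au = [[0] * (T + 1) for _ in range(T + 1)]
upow = [[0] * (T + 1) for _ in range(T + 1)]
upow[0][0] = 1  # u^0 = 1
for an in a:
    for i in range(T + 1):
        for j in range(T + 1):
            Au[i][j] += an * upow[i][j]
    upow = mul(upow, u)
lhs = mul(pref, Au)

# compare against the double-sum formula: [x^N y^M] = sum_n a_n C(N,n) C(M,n)
for N in range(T + 1):
    for M in range(T + 1):
        assert lhs[N][M] == sum(an * comb(N, n) * comb(M, n)
                                for n, an in enumerate(a))
```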

On the other hand, the one-dimensional version of $(*)$,
$$
A(x)\ge0 \iff \frac1{1-x}A\biggl(\frac{x}{1-x}\biggr)\ge0,
$$
is simply false.
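The failure is easy to check with the counterexample $A(x)=1+x-x^2+x^3$ mentioned in the comments below; a minimal sketch, using $x^n/(1-x)^{n+1}=\sum_N\binom{N}{n}x^N$:

```python
from math import comb

# A(x) = 1 + x - x^2 + x^3 has a negative coefficient, and yet
# A(x/(1-x))/(1-x) = sum_n a_n x^n/(1-x)^{n+1} is coefficient-wise >= 0,
# so the right-hand side does NOT imply A(x) >= 0.
a = [1, 1, -1, 1]
T = 50  # truncation order; for large N the C(N,3) term dominates anyway
b = [sum(an * comb(N, n) for n, an in enumerate(a)) for N in range(T + 1)]

print(b[:8])  # → [1, 2, 2, 2, 3, 6, 12, 22]
assert any(c < 0 for c in a) and all(c >= 0 for c in b)
```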

My question is whether it is possible to find two nontrivial rational functions
$p(x)\in\mathbb Q[[x]]$ and $r(x)\in x\mathbb Q[[x]]$ in one variable $x$
such that
$$
A(x)\ge0 \iff p(x)A\bigl(r(x)\bigr)\ge0.
$$
Although I am not supposed to put several problems in one question, I
would also like to ask for a more direct proof of $(*)$ and for general ways
of constructing such $p$ and $r$ in more than one variable.

Motivation. Basically I am interested in proving the nonnegativity of certain
$q$-series sequences $a_0(q),a_1(q),a_2(q),\dots$ by manipulating the
corresponding generating series $A_q(x)=\sum_{n=0}^\infty a_n(q)x^n$. Some
of them can be "guessed" from non-$q$-versions; for example, there is a neat
$q$-analogue of the criterion $(*)$.

Hi, Wadim. So you sometimes do "positivity" of series, same as the article I sent you. Let me be sure: $(*)$ is true and really does follow from the expansion with the binomial coefficients? Then the next one, with just $x/(1-x)$, is false, so I suggest you switch to "(But it is not!)". Do you have an example of falsity for this one?
– Will Jagy, Jul 16 '10 at 1:16

Hi Will, yes, I do (sometimes) positivity. :-) $(*)$ does follow by comparing the coefficients of $x^Ny^M$ when you fix successively $N=0,1,\dots$ and choose the corresponding $M$ sufficiently large. As for a 1-variable counterexample, take $A(x)=1+x-x^2+x^3$; then $A(x/(1-x))/(1-x)\ge0$.
– Wadim Zudilin, Jul 16 '10 at 1:37

I agree with Will that the sentence "but it does not" is confusing and should be changed to "but it is not."
– Charles Staats, Jul 16 '10 at 2:05

2 Answers

It is impossible, and not just for rational functions. To see this, let's consider the coefficients $b_n$ of $p(x) A(r(x))$ as functions of the $a_n$, the coefficients of $A(x)$. Since $r(0) = 0$ (as it must be) we see that:

Each $b_n$ is a linear combination of $a_0, \dots, a_n$; i.e., we have an upper-triangular infinite matrix $F$, not depending on the $a_n$, such that (writing $\vec{a} = (a_n)$, $\vec{b} = (b_n)$) $\vec{b} = F \vec{a}$. Suppose for now that $p(0), r'(0) \neq 0$; then $F$ has a nonzero diagonal, and so is invertible.

If $\vec{b} \geq 0$ for all $\vec{a} \geq 0$, then in particular this is true of the columns of $F$, taking $\vec{a}$ to be infinite "basis" vectors. Conversely, we have equivalently that $\vec{a} = F^{-1} \vec{b}$, so if $\vec{a} \geq 0$ for all $\vec{b} \geq 0$ this must be true of the columns of $F^{-1}$. We conclude that both $F$ and $F^{-1}$ have nonnegative entries.

Lemma in linear algebra: if $F$ is upper-triangular and both it and $F^{-1}$ have nonnegative real entries, then $F$ is diagonal. Proof by induction: true for $1 \times 1$ matrices vacuously. In general, by induction we may assume that the upper-left and lower-right $(n - 1) \times (n - 1)$ blocks of $F$ are diagonal, so only the $(1,n)$ entry of $F$ can be nonzero off the diagonal. Then we have $(F^{-1})_{1n} = -F_{1n}/(F_{11} F_{nn})$. Since $F_{11}$ and $F_{nn}$ are both positive, $F_{1n}, (F^{-1})_{1n} \geq 0$ implies $F_{1n} = 0$.
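The mechanism in the lemma can be seen in the smallest nontrivial case: a strictly positive off-diagonal entry in an upper-triangular matrix with positive diagonal always produces a negative entry in the inverse (a toy numeric sketch; the particular matrix is an arbitrary example of mine).

```python
# Upper-triangular F with positive diagonal and one positive off-diagonal entry
F = [[2.0, 3.0],
     [0.0, 5.0]]

# The inverse of [[a, b], [0, d]] is [[1/a, -b/(a*d)], [0, 1/d]]
a, b, d = F[0][0], F[0][1], F[1][1]
Finv = [[1 / a, -b / (a * d)],
        [0.0, 1 / d]]

# check it really is the inverse
prod = [[sum(F[i][k] * Finv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
assert all(abs(prod[i][j] - (i == j)) < 1e-12 for i in range(2) for j in range(2))

# F >= 0 entrywise, but F^{-1} is not
print(Finv[0][1])  # → -0.3
```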

This is also true of infinite matrices, since we can compute the finite upper-left blocks independently of the rest of the matrix.

However, if $F$ is diagonal then we see that $p(x)A(r(x)) = \sum a_n p(x) r(x)^n = \sum F_{nn} a_n x^n$ for all choices of $a_n$, so (for example, taking $a_n = t^n$ for a new variable $t$ and rewriting both sides as power series in $t$) we have $p(x) r(x)^n = F_{nn} x^n$ for all $n$ (and some $F_{nn} > 0$). That is, $(r(x)/x)^n = F_{nn} p(x)^{-1}$ for all $n$, so in fact $r(x)/x = F_{11}/F_{00}$ is constant, and finally, $p(x)$ is constant as well.

Now we remove the assumptions that $p(0), r'(0) \neq 0$. If $x^k$ divides $p(x)$, then replacing $p(x)$ by $p(x)/x^k$ does not change positivity of the coefficients. Now suppose the lowest-degree term of $r(x)$ is a positive multiple of $x^m$; then in $\mathbb{R}[[x]]$ we can write $r(x) = s(x)^m$, and if we denote $A_m(x) = A(x^m)$, we have $A(r(x)) = A_m(s(x))$. Clearly, $A_m$ has nonnegative coefficients if and only if $A$ does, and $s'(0) \neq 0$, so the previous proof applies and $s(x)$ is a multiple of $x$, i.e. $r(x)$ is a multiple of $x^m$, and $p(x)$ is a multiple of $x^k$.

Ryan, thank you very much for your computation (it will take time for me to follow it in detail). I took $p_0=r_1=q(x)=1$, so that $p(x)=1+x$, $r(x)=x-x^2$, and computed $p(x)A(r(x))$ for $A=\sum_{j=0}^4a_jx^j$. The coefficient of $x^8$ in the resulting polynomial is $-3a_4<0$.
– Wadim Zudilin, Jul 22 '10 at 6:04

Hmmm. Looking back over the argument I realize that it is not possible for $F$ to be diagonal, since that would necessitate $p(x) r(x)^n \in \mathbb{Q}$ for all $n$, an impossibility. I will think on this and replace the above answer with something correct tomorrow, when I am more awake.
– Ryan Reich, Jul 22 '10 at 6:38

No hurry, Ryan, and thanks again. I was thinking of the problem hard before posting it. The expectation is "no" which is probably harder than giving at least one (nontrivial) example.
– Wadim Zudilin, Jul 22 '10 at 6:43

It is definitely "no". My mistake (which betrays the fact that I am NOT a combinatorialist) is that I forgot all the binomial coefficients :)
– Ryan Reich, Jul 22 '10 at 13:08

I haven't gone through your solution yet, but I'm wondering: where do you lose answers like $p(x)=x^3$ and $r(x)=x^2$?
– Gjergji Zaimi, Jul 22 '10 at 17:34

Just a quick observation. It is not hard to see that this is impossible if there exists some $i$ such that for all $j \ge i$, the coefficients of $p(x) r(x)^j$ are all positive after the first positive coefficient. (This occurs, for example, whenever $p, r$ themselves have this property.) This is because by induction one can take $A(x) = a_i x^i - x^{i+1} + a_{i+2} x^{i+2} + \dots$ where $a_i, a_{i+2}, \dots$ can be chosen large enough so that $p(x) A(r(x)) \ge 0$. In general some weird things happen that you might be able to fix with the Skolem-Mahler-Lech theorem, and I suspect that when $r$ is a polynomial it should always be possible to find a counterexample.
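For the pair behind the false one-dimensional criterion, $p(x)=1/(1-x)$ and $r(x)=x/(1-x)$, this hypothesis is easy to verify directly: $p(x)r(x)^j=x^j/(1-x)^{j+1}$ has $x^N$-coefficient $\binom{N}{j}>0$ for all $N\ge j$, consistent with the counterexample $A(x)=1+x-x^2+x^3$ from the comments. A truncated-series check (an illustrative sketch only):

```python
from math import comb

T = 25  # truncation order

def mul(f, g):
    """Multiply two truncated power series (coefficient lists of length T+1)."""
    h = [0] * (T + 1)
    for i, fi in enumerate(f):
        for j in range(T + 1 - i):
            h[i + j] += fi * g[j]
    return h

inv = [1] * (T + 1)                   # 1/(1-x)
p = inv                               # p(x) = 1/(1-x)
r = mul([0, 1] + [0] * (T - 1), inv)  # r(x) = x/(1-x)

prj = p  # p(x) * r(x)^j, starting with j = 0
for j in range(8):
    # all coefficients from x^j on are strictly positive...
    assert all(c > 0 for c in prj[j:])
    # ...and match the closed form C(N, j) for x^j/(1-x)^{j+1}
    assert prj[j:] == [comb(N, j) for N in range(j, T + 1)]
    prj = mul(prj, r)
```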

Thanks a lot, Qiaochu! I didn't have your argument to exclude the "eventually positive" ($j\ge i$) case of $p(x)r(x)^j$, but of course I suspect that only "trivial" pairs $p(x),r(x)$ do the job. I am just wondering whether the problem has been studied... Can you provide some details on what you mean by the SML theorem?
– Wadim Zudilin, Jul 18 '10 at 2:44

If $r$ is not a polynomial, the coefficients of $p(x) r(x)^j$ will be nonzero infinitely often for sufficiently large $j$. Unfortunately, they can also be zero infinitely often, so changing the coefficients of $A$ isn't necessarily enough to give you control over the coefficients of $p(x) A(r(x))$. The SML theorem tells you what kind of control you have, and I think one could argue as above, but more carefully.
– Qiaochu Yuan, Jul 18 '10 at 2:55