I asked this question on math.stackexchange but I didn't have much luck; it might be more appropriate for this forum. Let $z_1,z_2,\dots,z_n$ be i.i.d. random points, uniformly distributed on the unit circle ($|z_i|=1$), and consider the random polynomial $P(z)$ given by
$$
P(z)=\prod_{i=1}^{n}(z-z_i).
$$
Let $m$ be the maximum absolute value of $P(z)$ on the unit circle: $m=\max\{|P(z)| : |z|=1\}$.

How can I estimate $m$? More specifically, I would like to prove that there exists $\alpha>0$ such that the following holds almost surely as $n\to\infty$:
$$
m\geq \exp(\alpha\sqrt{n}).
$$
Or at least that for every $\epsilon>0$ there exists $n$ sufficiently large such that
$$
\mathbb{P}(m\geq\exp(\alpha\sqrt{n}))>1-\epsilon
$$
for some $\alpha$ independent of $n$.

A routine computation shows (if I haven't made a computational error) that the average value of $|P(1)|$ on the unit circle is $(4/\pi)^n$.
– Richard Stanley Apr 30 '11 at 15:37
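This average can be checked numerically; here is a quick Monte Carlo sketch (sample size and seed are arbitrary choices of mine), using the identity $|1-e^{i\theta}|=2|\sin(\theta/2)|$:

```python
import math
import random

def mean_abs_one_minus_z(samples=200_000, seed=0):
    """Monte Carlo estimate of E|1 - z| for z uniform on the unit circle."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        theta = rng.uniform(0.0, 2.0 * math.pi)
        total += 2.0 * abs(math.sin(theta / 2.0))  # |1 - e^{i theta}|
    return total / samples

print(mean_abs_one_minus_z(), 4.0 / math.pi)  # both close to 1.273
```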


@Richard and Ori: Thanks! Both of you are right: the expectation is $\mathbb{E}(|1-z_i|)=\frac{4}{\pi}$, so $\mathbb{E}(|P(1)|)$ grows exponentially. However, it seems to me that I can't prove what I'm looking for from this fact. Note that if we consider $\log|P(z)|$, then $\mathbb{E}(\log|1-z_i|)=0$, so $\frac{1}{\sqrt{n}}\log|P(1)|$ converges in distribution as $n\to\infty$ to a zero-mean Gaussian random variable $\psi$ with finite variance. Therefore, $|P(1)|$ can take very large but also extremely small values equally likely!
– ght Apr 30 '11 at 17:35


Hmmm, you're right. So I guess you'd want to estimate the correlation between $\log |P(1)|$ and $\log |P(e^{i\epsilon})|$, which leads us to your other questions.
– Ori Gurel-Gurevich May 1 '11 at 5:55

4 Answers

Here is a more careful (EDIT: even more careful!) argument that gives an affirmative answer to the weaker version of the question (as stated in the edit to my previous post, I doubt that the stronger version is true).

The argument uses the following lemma, which ought to be known. If someone has a reference, please leave a comment.
Lemma: If $X_1,\dots,X_n$ are independent uniform $\pm 1$ random variables, $a_1,\dots,a_n \geq 1$, and $I$ is an interval of length $2r$, then
$$
\Pr\left(a_1X_1+\cdots+a_nX_n \in I\right) \leq (1+r)\sqrt{2/(\pi n)}.
$$

Proof:
Let $f(X)$ denote $a_1X_1+\cdots+a_nX_n$.
In the Boolean lattice of all assignments of $\pm 1$ to the variables $X_1,\dots,X_n$, consider a random walk starting from the point where all $X_i$'s are $-1$, and moving in $n$ steps to the point where they are all $+1$, in each step choosing uniformly and independently of the history a variable which is $-1$ and changing it to $+1$.

What is the expectation of the number $N(I)$ of steps of this walk at which $f(X) \in I$? On one hand, $N(I)\leq 1+r$, since $f(X)$ increases by at least 2 in each step.

On the other hand, the probability that the walk passes through any given point in the Boolean lattice is at least $2^{-n}\sqrt{\pi n/2}$ (this probability is minimized at the middle level(s) of the lattice, and the claim follows by well-known estimates of the central binomial coefficient). Therefore
$$EN(I) \geq \frac{\#\{X:f(X)\in I\}}{2^n}\cdot \sqrt{\pi n/2} = Pr(f(X)\in I) \cdot \sqrt{\pi n/2}.$$
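Combining $EN(I)\leq 1+r$ with this bound gives $\Pr(f(X)\in I)\leq (1+r)\sqrt{2/(\pi n)}$ for an interval of length $2r$. As a sanity check (not part of the argument; the parameters are my own choices), the empirical probability can be compared with the bound:

```python
import math
import random

def anticoncentration_freq(a, lo, hi, trials=100_000, seed=1):
    """Empirical Pr(a_1 X_1 + ... + a_n X_n in [lo, hi]) for fair signs X_i."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s = sum(ai if rng.random() < 0.5 else -ai for ai in a)
        if lo <= s <= hi:
            hits += 1
    return hits / trials

n, r = 400, 5.0
p = anticoncentration_freq([1.0] * n, -r, r)      # all a_i = 1, |I| = 2r
bound = (1 + r) * math.sqrt(2.0 / (math.pi * n))  # the lemma's bound
print(p, bound)  # empirical frequency stays below the bound
```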

As was explained in the earlier post, we can randomly choose $n$ pairs of opposite points $\{z_i, -z_i\}$, then find $z$ with $\left|P(z)P(-z)\right|=1$ given only this information, and finally fix the $z_i$'s by $n$ independent coin flips.

In order to apply the lemma, we want to have, before the coin flipping, $n/2$ pairs $z_i, -z_i$ making an angle of at most, say, $60$ degrees with $z, -z$, so that each of the $n/2$ corresponding coin flips determines the sign of a term of size at least $\log 3$ in $\log\left|P(z)\right| - \log\left|P(-z)\right|$. Actually, after choosing the $n$ pairs $z_i, -z_i$, this a.a.s. holds for every $z$. The idea is to divide the circle into, say, $100$ equally large sectors. With high probability, every pair of opposite sectors will contain at least $n/51$ pairs (as opposed to the expected number, $n/50$).

We now condition on the outcomes of the coin flips for the smaller terms (pairs $z_i, -z_i$ more or less orthogonal to $z$). The lemma above tells us that for any interval $I$ of length $4\alpha\sqrt{n}$, the probability that $\log\left|P(z)\right| - \log\left|P(-z)\right| \in I$ is at most $4\alpha/\sqrt{\pi} + O(1/\sqrt{n})$.

In particular, with probability at least $1-O(\alpha)$, the absolute value of $\log\left|P(z)\right| - \log\left|P(-z)\right|$ is at least $2\alpha\sqrt{n}$, and since $\log\left|P(z)\right|=-\log\left|P(-z)\right|$, $\max\left(\log\left|P(z)\right|, \log\left|P(-z)\right|\right)\geq \alpha\sqrt{n}$ as required.

I have a couple of comments regarding Chandru's answer, but they're too long to fit in the comment box, so I'm making this a separate answer. First, the quantity
$$
F(P) = \int_0^{2\pi} \log|P(e^{i\phi})| d\phi
$$
is not non-negative without some further assumption. The easiest is to assume that the polynomial $P$ is monic. But as a trivial counterexample in general, one can take $P(z)=c$
with $|c|<1$. For Chandru's argument, somewhere one needs to use the assumption that $P(0)=1$ to make the conclusion that $F(P)\ge0$.

Second, the quantity $F(P)$ is essentially the logarithm of the classical Mahler measure, which is defined by
$$
\log M(P) = \frac{1}{2\pi} \int_0^{2\pi} \log|P(e^{i\phi})| d\phi.
$$
Factoring $P$ as
$$
P(z) = c(z-z_1)\cdots(z-z_n),
$$
it is an elementary classical computation to show that
$$
\log M(P) = \log|c|+\sum_{i=1}^n \log\max(1,|z_i|).
$$
In particular, if $|c|=1$, then $\log M(P)\ge0$.
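For what it's worth, the two expressions for $\log M(P)$ are easy to compare numerically; here is a quick check (the example polynomial and grid size are mine):

```python
import cmath
import math

def log_mahler_from_roots(c, roots):
    """Jensen's formula: log M(P) = log|c| + sum of log|z_i| over roots outside the unit disk."""
    return math.log(abs(c)) + sum(max(0.0, math.log(abs(z))) for z in roots)

def log_mahler_numeric(c, roots, steps=4096):
    """(1/2pi) * integral over the circle of log|P(e^{i phi})|, by the midpoint rule."""
    total = 0.0
    for k in range(steps):
        w = cmath.exp(2j * math.pi * (k + 0.5) / steps)
        p = c
        for z in roots:
            p *= (w - z)
        total += math.log(abs(p))
    return total / steps

roots = [2.0, 0.5, -1.5j]  # P(z) = (z-2)(z-1/2)(z+1.5i), c = 1
a = log_mahler_from_roots(1.0, roots)
b = log_mahler_numeric(1.0, roots)
print(a, b)  # both approx log 2 + log 1.5 = 1.0986...
```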

@Joe Silverman: Dear Joe, please note that it's not my solution; the solution can be found in the book "Contests in Higher Mathematics" by G. Székely.
– S.C. Jul 16 '11 at 16:56


@Chandru: Okay, fair enough, but since you liked the lengthy solution enough to copy it into MO (instead of just providing a reference), you might want to edit it to fix that gap. The point is that the author appears to have claimed and proven that $F(P)\ge0$ without using the assumption that $P(0)=1$. What he's done is ignore the $\log|c|$ term coming from the leading coefficient and only shown that the integrals of the other $\log|z-z_i|$ terms are nonnegative. Hint: use the fact that $\log|c|+\sum\log|z_i|=\log|P(0)|=0$.
– Joe Silverman Jul 16 '11 at 19:50

@Joe: But there is a line which states: "In our case, the relation $P(0)=1$ implies, ...".
– S.C. Jul 16 '11 at 20:02

@Joe: which means that he has assumed $P(0)=1$.
– S.C. Jul 16 '11 at 20:02


@Chandru: My apology. The first part of the proof is only a proof that the quantity $F(P)$ is well-defined. Since he starts by saying that he needs to prove that $F(P)$ is well-defined and non-negative, I had thought that's what he was doing. But you're right, it isn't until later that he proves $F(P)\ge0$ for his polynomials having $P(0)=1$. And he doesn't just use the existence of $F(P)$ there, he uses the explicit formula he gets for general $F(P)$. So I stand corrected, his proof is fine, albeit IMO not very well written.
– Joe Silverman Jul 16 '11 at 20:09

I think $z$ should be chosen so that the deviations tend to go in the positive direction on all scales. The following approach seems to work: Suppose $n$ is a power of 2 (some fix is needed if it isn't). Suppose the points $z_i$ are sorted, say in counter-clockwise direction, and assume without loss of generality that we have $z_0 = z_n = 1$ (indexing modulo $n$). The points $z_0$ and $z_{n/2}$ split the unit circle in two sectors. We pick the larger of those two, and look at the point whose index is the mean of the indices at the endpoints (either $z_{n/4}$ or $z_{3n/4}$, the point that we expect to be near the midpoint of that larger sector). That point splits the sector in two, and we pick the larger of those two and continue. In the end we arrive at two consecutive points $z_i$, and we let $z$ be the midpoint of the sector between them.

It should now be possible to get a high-probability lower bound on $\log|P(z)|$. I haven't done this in detail, but a simulation for $n=64,128,\dots,4096$ indicates that $\log|P(z)|$ is rarely much smaller than $\sqrt{n}$. Since you ask for an idea that might be useful, I dare post this as an answer.
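The bisection heuristic can be coded directly; the sketch below (variable names and parameters are mine) works with the sorted angles of the $z_i$, and for equally spaced points it returns a midpoint where $|P(z)|=2$, i.e. $\log|P(z)|=\log 2$:

```python
import cmath
import math
import random

def choose_z(angles):
    """Bisection heuristic: starting from the full circle, repeatedly keep the
    larger of the two sectors cut at the middle-index point, until the sector
    lies between two consecutive points; return the midpoint of that arc."""
    n = len(angles)   # sorted angles in [0, 2*pi); n a power of 2
    lo, hi = 0, n     # current sector runs from angles[lo] to angles[hi % n]
    while hi - lo > 1:
        mid = (lo + hi) // 2
        left = (angles[mid % n] - angles[lo % n]) % (2.0 * math.pi)
        right = (angles[hi % n] - angles[mid % n]) % (2.0 * math.pi)
        lo, hi = (lo, mid) if left >= right else (mid, hi)
    gap = (angles[hi % n] - angles[lo % n]) % (2.0 * math.pi)
    return cmath.exp(1j * (angles[lo % n] + gap / 2.0))

def log_abs_P(z, angles):
    return sum(math.log(abs(z - cmath.exp(1j * t))) for t in angles)

rng = random.Random(0)
n = 1024
angles = sorted(rng.uniform(0.0, 2.0 * math.pi) for _ in range(n))
z = choose_z(angles)
print(log_abs_P(z, angles) / math.sqrt(n))  # of order 1 in typical runs
```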

UPDATE:
Here's a simple argument that should almost, but not quite, give the desired bound. What it ought to show, although some precision in the analysis is still missing, is that for every $\epsilon>0$ there is an $\alpha>0$ such that $m\geq \exp(\alpha\sqrt{n})$ with probability at least $1-\epsilon$.

Here's how it works: Since the mean value of $\log\left|P(z)\right|$ is 0, we can always find a $z$ such that $\left|P(z)P(-z)\right| = 1$. Notice that $P(z)P(-z)$ is unchanged if we replace $z_i$ by $-z_i$. Therefore we can start by randomly generating $n$ pairs of diametrically opposite points $\{z_i, -z_i\}$, then find $z$ with $\left|P(z)P(-z)\right|=1$ given only this information, and finally fix the $z_i$'s by $n$ independent coin flips.

Now condition on the outcome of the first stage of the process, so that the $n$ pairs $\{z_i, -z_i\}$ are fixed. With high probability there is a bunch of say $n/2$ such pairs for which the quantity $\log\left|P(z)\right| - \log\left|P(-z)\right|$ is affected by at least some constant depending on a coin flip (this just requires $z_i$ to be substantially closer to one of $z$ and $-z$ than to the other). Therefore the standard deviation of $\log\left|P(z)\right| - \log\left|P(-z)\right|$ is at least some constant times $\sqrt{n}$, which means that $\max\left(\log\left|P(z)\right|, \log\left|P(-z)\right|\right)$ should be of order $\sqrt{n}$ most of the time.
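Two ingredients here are easy to check numerically: $|P(z)P(-z)|$ is unchanged under $z_i\mapsto -z_i$ (since $(z-z_i)(-z-z_i)=z_i^2-z^2$), and since the mean of $\log|P(z)P(-z)|$ over the circle is $0$, this quantity changes sign, so a point with $|P(z)P(-z)|$ near $1$ can be located by a grid search. A rough sketch (grid size and seed are my own choices):

```python
import cmath
import math
import random

def log_abs_PP(theta, roots):
    """log|P(z)P(-z)| at z = e^{i theta}."""
    z = cmath.exp(1j * theta)
    return sum(math.log(abs((z - r) * (-z - r))) for r in roots)

rng = random.Random(3)
n = 256
roots = [cmath.exp(1j * rng.uniform(0.0, 2.0 * math.pi)) for _ in range(n)]

# Invariance: flipping a root z_i to -z_i leaves log|P(z)P(-z)| unchanged.
flipped = [-roots[0]] + roots[1:]

# The mean of log|P(z)P(-z)| over the circle is 0, so it takes both signs;
# a sign change on a fine grid gives a point where |P(z)P(-z)| is near 1.
grid = 4096
vals = [log_abs_PP(2.0 * math.pi * k / grid, roots) for k in range(grid)]
crossing = next(k for k in range(grid) if vals[k] * vals[(k + 1) % grid] <= 0)
print(crossing, min(vals) < 0.0 < max(vals))
```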

I guess the argument can be made precise, but this choice of $z$ doesn't let us fix $\alpha>0$ and get a.a.s. the bound asked for.

Perhaps this helps to at least clarify the question. What the OP asks for is just beyond what we get with this argument.

EDIT: The more I think about it, the more I suspect that the first statement asked for in the OP is not true. Of course $\log\left|P(z)\right|$ will take large negative values when $z$ is extremely close to a $z_i$, but when it isn't, it seems that the irregularities in distribution of the $z_i$ on a smaller scale will be less significant than the large scale distribution. Roughly speaking this is because by the nature of the logarithm, the points $z_i$ close to $z$ will have only a mildly stronger influence on $\log\left|P(z)\right|$ than the points farther away. If the points $z_i$ happen to be unusually but not extremely uniformly distributed on the large scale, it seems that $\max\left(\log\left|P(z)\right|\right)$ need not be larger than any particular constant times $\sqrt{n}$.

If this is correct, it means that the values of $\log\left|P(z)\right|$ for different $z$ are significantly correlated even on a larger scale.
Needless to say, these speculations are still nothing close to a proof.

The second, weaker version seems to be equivalent to what is claimed in the update above, and it should not be too hard to fill in the missing details.

Sounds reasonable - except, how would you take randomness into account? It can well happen that the points $z_i$ are almost equally spaced, in which case nothing helps; so it is impossible to prove anything if randomness is ignored.
– Seva May 2 '11 at 18:38

Seva, of course you are right. This is just a suggestion for how to choose $z$ given the points $z_i$, and a quick simulation indicates, although not conclusively, that it might be good enough for establishing the bound that the OP asks for.
– Johan Wästlund May 2 '11 at 19:47

The UPDATE part is very convincing; a nice argument! It resembles (the solution of) an olympiad-type problem I heard a while ago: what is the probability that $n$ points, randomly chosen on the unit circle, leave an empty arc of length $\pi$?
– Seva May 3 '11 at 17:03


@Johan: Thank you. I thought that your argument was correct but now I'm having second thoughts. I agree with the statement: "With high probability there is a bunch of say $n/2$ pairs for which the quantity $\log|P(z)|-\log|P(-z)|$ is affected by at least some constant depending on a coin flip." So if you change $z_i$ to $-z_i$ without changing any other pair, then $\log|P(z)|-\log|P(-z)|$ will be affected by at least some constant. However, the same won't be true if I change two or more pairs simultaneously. Then, how do you get that the standard deviation is at least some constant times $\sqrt{n}$?
– ght May 4 '11 at 19:43


Johan: After we condition on the $n$ pairs, we have $2^n$ possible choices for the final sequence of $z_i$'s, all equally likely with probability $\frac{1}{2^n}$. It seems to me that your argument only gives us a lower bound on the variance of order $\frac{n}{2^n}$. How do you get $\sqrt{n}$? I'm sure that I am missing something, right? Please clarify. Thanks.
– ght May 4 '11 at 21:50

I believe you can obtain very reasonable bounds for your problem using the following approach. (I myself was too lazy to carry out the computations.) Split the unit circle into the union of an interval $I$ of length $4\pi/(n\log n)$ and $N\sim \pi n/\log n$ intervals $J_k$ of length about $2\log n/n$ each. (You may need to adjust the logarithmic factors at the optimization stage.) Almost surely, the interval $I$ will not contain any point $z_i$, whereas each of the intervals $J_k$ will contain at most $3\log n$ points. Now choose your point $z$ to be in the middle of the interval $I$ and compute $|P(z)|$ for the worst-case scenario, where each of the two intervals $J_k$ abutting $I$ contains $3\log n$ points $z_i$, all of them at distance $2\pi/(n\log n)$ from $z$, the two "next" intervals $J_k$ also contain $3\log n$ points each at the minimum possible distance from $z$, and so on.

If the $z_i$'s are positioned with equal spacing, and $z$ is taken right between two consecutive points, then $\left|P(z)\right|$ is exactly 2. I don't see how an argument of this type could lead to a better bound than that. Even taking $z$ in the middle of the largest gap between $z_i$'s (which is of order $\log n/n$) and assuming the rest of the points are equally spaced seems insufficient.
– Johan Wästlund May 2 '11 at 16:11
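The exact value $2$ is easy to verify: for the $n$-th roots of unity, $P(z)=z^n-1$, and at the midpoint of any gap $z^n=-1$, so $|P(z)|=2$. A quick numerical check (numbers mine):

```python
import cmath
import math

n = 64
roots = [cmath.exp(2j * math.pi * k / n) for k in range(n)]
z = cmath.exp(1j * math.pi / n)  # midpoint between the roots 1 and e^{2*pi*i/n}
p = 1.0 + 0.0j
for r in roots:
    p *= (z - r)                 # P(z) = prod (z - z_i) = z^n - 1
print(abs(p))  # 2.0 (up to rounding)
```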

Seems you are right... Nevertheless, the argument I have outlined may be useful to show that one should not expect much and, typically, $\max |P(z)|$ is relatively small. In fact, this even simplifies the reasoning; the interval $I$ is not needed any longer - just split the circle into $N\sim n/\log n$ arcs, notice that a.s. each arc contains $\Omega(\log n)$ points, and so, for any choice of $z$, we have at least $\Omega(\log n)$ points within the distance $2\pi/N$ to the left of $z$ and at least $\Omega(\log n)$ points within the distance $2\pi/N$ to the right of $z$ etc.
– Seva May 2 '11 at 18:34