What is the growth rate of the magnitude of the elements of $M^k$ as a function of $k$? It is definitely
exponential, but maybe the exponent is known?

Is it the case that eventually one element of $M^k$ dominates, as $k \rightarrow \infty$?
I have some ambiguous experimental evidence that this is the case, but because
of the exponential growth, exact computation is difficult, rendering my "evidence" tenuous
at best and perhaps worthless.

One can ask the same question for matrices whose elements are random reals in [-1,1],
or random 0's and 1's, or random choices among $\lbrace -1, 0, 1\rbrace$, ...
These questions have likely been studied. Thanks for pointers and/or ideas!
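For what it's worth, the experiment can be sketched in NumPy (a sketch only; the size $n = 20$, the $\lbrace -1, 0, 1\rbrace$ entries, the seed, and the range of $k$ are arbitrary choices). It compares the empirical growth rate of the largest entry of $M^k$ with $\log$ of the spectral radius of $M$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
# Random matrix with entries drawn from {-1, 0, 1}
M = rng.choice([-1.0, 0.0, 1.0], size=(n, n))

# Spectral radius: largest eigenvalue modulus of M
rho = np.abs(np.linalg.eigvals(M)).max()

# Track log of the largest-magnitude entry of M^k for k = 1..30
Mk = np.eye(n)
logs = []
for k in range(1, 31):
    Mk = Mk @ M
    logs.append(np.log(np.abs(Mk).max()))

# Fit a line to the tail (k = 11..30); the slope estimates the growth rate
ks = np.arange(1, 31)
slope = np.polyfit(ks[10:], logs[10:], 1)[0]
print(slope, np.log(rho))  # the two numbers should be close
```

The slope of $\log \max_{i,j} |(M^k)_{ij}|$ against $k$ settles down to $\log \rho(M)$, which is the exponent asked about above.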

4 Answers

The best results have been obtained for random matrices with Normally distributed entries, but some of them, like Wigner's semicircle law, extend to uniformly distributed entries. (Independence of the entries is crucial.) Wigner's law applies only to symmetric random matrices. Girko's circular law holds without the symmetry condition, but afaik requires Normally distributed entries. It says that for largish $n$ the eigenvalues are approximately uniformly distributed on a disk. For smaller $n$, before these asymptotics are reached, there is a preference for real eigenvalues extending beyond the disk. At any rate, these asymptotics immediately give you the distribution of eigenvalues of positive integral powers of such matrices, especially since with probability one all eigenvalues are distinct (and therefore the matrix is diagonalizable over $\mathbb{C}$). For example, the largest eigenvalue of $M^k$ will be approximately $n^{k/2} \left( 1 - \frac{1}{2} (3 \pi / (2 n) ) ^ {2/3} \right) ^ k$.
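A quick numerical illustration of the circular law (a sketch; $n = 500$, the seed, and the thresholds are arbitrary choices): for i.i.d. entries of mean $0$ and variance $\sigma^2$, the eigenvalues approximately fill the disk of radius $\sigma\sqrt{n}$ uniformly, so the fraction with modulus below $c\,\sigma\sqrt{n}$ should be about $c^2$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
sigma = 1.0
A = rng.normal(0.0, sigma, size=(n, n))
ev = np.linalg.eigvals(A)

# Rescale moduli by sigma * sqrt(n), the circular-law radius
r = np.abs(ev) / (sigma * np.sqrt(n))

# Nearly all eigenvalues lie inside the (slightly padded) unit disk
print(np.mean(r <= 1.05))
# Uniform on a disk => fraction with |z| <= 1/2 is (1/2)^2 = 1/4
print(np.mean(r <= 0.5))
```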

@Joseph: That's interesting. It differs from the asymptotic formula for two reasons: first, the asymptotic regime likely hasn't been reached by $n=20$; second, you're dealing with uniform entries rather than Normally distributed ones. It could be interesting to repeat your experiments with Normally distributed entries. If you do, it would be best to match moments: that is, compare a uniform distribution on $[-1,1]$, which has variance $1/3$, with a Normal distribution of zero mean and variance $1/3$. (Notice how the variance effectively scales all the eigenvalues.)
– whuber Oct 11 '10 at 17:06
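The moment-matched comparison suggested in the comment can be sketched like this (a sketch; $n = 20$, the trial count, the seed, and the helper `mean_spectral_radius` are choices made here, not anything from the thread):

```python
import numpy as np

rng = np.random.default_rng(5)
n, trials = 20, 200

def mean_spectral_radius(sampler):
    # Average the largest eigenvalue modulus over many random draws
    return np.mean([np.abs(np.linalg.eigvals(sampler((n, n)))).max()
                    for _ in range(trials)])

# Uniform on [-1,1] has mean 0 and variance 1/3; match it with
# Normal(0, 1/3) so the eigenvalue scales are directly comparable.
r_unif = mean_spectral_radius(lambda s: rng.uniform(-1, 1, size=s))
r_norm = mean_spectral_radius(lambda s: rng.normal(0, np.sqrt(1 / 3), size=s))
print(r_unif, r_norm)  # both should be near sqrt(n/3)
```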

As others have observed, the growth rate of $M^k$ is determined by the largest eigenvalue of $M$. I just want to note that for a nonnegative matrix, the largest eigenvalue always lies between the smallest and the largest column sum.

That, in principle, could allow you to obtain upper and lower bounds by estimating how large or small the column sums typically get for the random matrices you describe.
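For example (a sketch; the 0/1 entries, $n = 15$, and the seed are arbitrary choices), the column-sum bound can be checked numerically:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 15
M = rng.integers(0, 2, size=(n, n)).astype(float)  # random 0/1 entries

col_sums = M.sum(axis=0)
rho = np.abs(np.linalg.eigvals(M)).max()

# For a nonnegative matrix: min column sum <= rho <= max column sum
print(col_sums.min(), rho, col_sums.max())
```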

Powers of a matrix are most easily calculated by first diagonalizing it. Suppose $M$ is diagonalizable, and let $P, D$ be matrices with $D$ diagonal and $M = PDP^{-1}$; then $$M^k = PD^{k}P^{-1}.$$ The entries of $D$ are the eigenvalues of $M$, so each entry of $M^k$ is a linear combination of the $k$-th powers of the eigenvalues, and the entries of $M^k$ grow exponentially with rate $\log |\lambda_{\max}|$, where $\lambda_{\max}$ is the largest eigenvalue of $M$ in absolute value.
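As a sketch (small $n$ and $k$ and the seed chosen arbitrarily), the identity $M^k = PD^kP^{-1}$ can be verified numerically; a random matrix has distinct eigenvalues almost surely, hence is diagonalizable over $\mathbb{C}$:

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 6, 12
M = rng.uniform(-1, 1, size=(n, n))

# Diagonalize over C: columns of P are eigenvectors, w the eigenvalues
w, P = np.linalg.eig(M)

Mk_direct = np.linalg.matrix_power(M, k)
Mk_diag = (P @ np.diag(w**k) @ np.linalg.inv(P)).real  # M^k = P D^k P^{-1}
print(np.allclose(Mk_direct, Mk_diag))
```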

So to point 2: no single entry should dominate the others significantly.

This is only a partial answer; maybe you can find something about the distribution of eigenvalues of random matrices. =)

The following analysis treats only the particular case of symmetric matrices $M$, which can be diagonalized by
an orthogonal transformation. The key observation is that the diagonalizing
matrix $P$ and the eigenvalue matrix $D$ are statistically independent, and $P$ is Haar-distributed on the orthogonal group $O(n)$. It is then easy to see that every element of $M^k$ is a scalar product of
two rows of the Haar-distributed matrix $P$, weighted by the $k$-th powers of the eigenvalues.

For large $k$, the maximal eigenvalue (in absolute value) dominates the others, while the contribution from the diagonalizing matrix stays fixed.
Thus the growth rate of every matrix element is the natural logarithm of the absolute value of the maximal eigenvalue.

Moreover, the contribution of the maximal eigenvalue to the $(i,j)$ element is $P_{im} D_{mm}^k P_{jm}$,
where $D_{mm}$ is the maximal eigenvalue. Thus the element for which the product $P_{im} P_{jm}$ is maximal dominates all elements of the power matrix.
This element will most likely lie on the diagonal, because a diagonal element is multiplied by the square $P_{im}^2$ of a Haar-distributed entry, which has nonzero mean,
whereas an off-diagonal element is multiplied by the product of two different Haar-distributed entries, which has zero mean.
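This is easy to test numerically (a sketch; $n$, $k$, and the seed are arbitrary choices). Note that for even $k$ the conclusion is automatic: $M^k$ is then positive semidefinite, and the largest-magnitude entry of a positive semidefinite matrix always lies on its diagonal.

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 30, 40
A = rng.uniform(-1, 1, size=(n, n))
M = (A + A.T) / 2          # symmetric random matrix

Mk = np.linalg.matrix_power(M, k)

# Locate the entry of largest magnitude in M^k
i, j = np.unravel_index(np.abs(Mk).argmax(), Mk.shape)
print(i == j)              # dominant entry lies on the diagonal
```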