Consider the structure $R^n$ consisting of $n\times n$
matrices over the reals $\mathbb{R}$, $n$-dimensional row
vectors, column vectors and real scalars, with the ordered
field structure on the scalars. Thus, we can add and
multiply matrices; we can multiply anything by a scalar; we
can multiply matrices by vectors (on the suitable side);
and we can add and multiply vectors of the suitable shape.

The corresponding matrix algebra language has four
variable sorts---scalars, matrices, row vectors and column
vectors---together with the rules for forming terms so that
these expressions make sense in any $R^n$. In this
language, you can quantify over matrices, vectors and
scalars, form equations (and inequalities with the
scalars), but you cannot quantify over the dimension. The
idea is that an assertion in this language can be
interpreted in any dimension, one $R^n$ at a time. You have to make assertions that do not refer to the dimension; the
language is making assertions about matrices and vectors in
some fixed but unspecified dimension.

My question is whether truth in this real matrix algebra
obeys a 0-1 law as the dimension increases, that is:

Question. Is every statement in the matrix algebra
language either eventually true or eventually false in
$R^n$ for all sufficiently large dimensions $n$?

To give some trivial examples:

the statement asserting matrix commutativity
$\forall A,B\, AB=BA$ is true in dimension $1$ but false in all
higher dimensions.
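As a quick numerical illustration (a sketch in numpy; the particular matrices are arbitrary choices, not from the original post):

```python
import numpy as np

# Dimension 1: 1x1 matrices are scalars, so multiplication commutes.
A1 = np.array([[2.0]])
B1 = np.array([[3.0]])
assert np.allclose(A1 @ B1, B1 @ A1)

# Dimension 2: a single counterexample refutes "forall A, B: AB = BA".
A2 = np.array([[0.0, 1.0],
               [0.0, 0.0]])
B2 = np.array([[0.0, 0.0],
               [1.0, 0.0]])
assert not np.allclose(A2 @ B2, B2 @ A2)  # the two products differ
```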

the statements that the dimension is at least 17, or at
most 25, or an odd number less than 1000, are all
expressible, since you can quantify over enough vectors,
and the assertions that they are independent or that they
span are expressible. The truth values of these
statements all stabilize in sufficiently high dimension.

the assertion that a particular real number is an
eigenvalue for a matrix is expressible.

But it isn't clear to me how one could express, for
example, that the dimension is even. (Edit: Gerry and Ryan below have explained how this is easily done.)

In the previous question, Ricky inquired
whether there is a decision procedure to determine which
assertions of matrix algebra are true for all $n$. For any
particular $n$, Tarski's theorem on the decidability
of real-closed fields shows that the theory of the
structure $R^n$ is decidable: when $n$ is fixed, we may
translate any statement about matrices and vectors into
statements about real numbers by talking about the
components. (We may also add to the language the functions
that map a matrix or vector to the value of any particular
entry, as well as $det(A)$ etc.)

If my question here has a positive answer, and the
stabilizing bound is computable from the formula, then this
would provide an affirmative answer to Ricky's question,
since we could just determine truth in a large enough
$R^n$.

Lastly, I don't think it will fundamentally change the
problem to work in the complex field, since the
corresponding structure $C^n$ with complex matrices and
vectors is interpretable in $R^n$. For example, I think we
could freely refer to complex eigenvalues.

Edit. The real case was quickly dispatched by Gerry and Ryan, below. Let us therefore consider the complex case. So we have for each dimension $n$ the structure $C^n$ with $n\times n$ matrices, row vectors, column vectors and complex scalars. The question is: Does the truth of every statement of matrix algebra stabilize in $C^n$ for sufficiently large $n$?

Ricky proposed that we add the Hermitian transpose (conjugation on scalars) to the language. This would also allow us to refer to the real scalars. If we expand the language so that we are able to define the class of real matrices and vectors, however, then we can still express Gerry's and Ryan's solutions for a negative answer here.

Edit 2. As in the comments, let us say that the truth set of a formula $\phi$ in the language is the set of $n$ for which $\phi$ is true in dimension $n$. These truth sets form a Boolean algebra, closed under finite differences. Which sets of natural numbers are realizable as truth sets? (Note that there are only countably many truth sets.) And how does it depend on the field?

I don't understand the bit about complex eigenvalues. The statement "for every square matrix there is a nonzero vector and a real number such that (matrix)(vector) = (number)(vector)" is true in odd dimensions, not in even. How do you get around that?
– Gerry Myerson, Aug 2 '10 at 1:12


It appears that my question is quickly getting a negative answer here. (MO is amazing!) But could you kindly post answers as answers?
– Joel David Hamkins, Aug 2 '10 at 1:24


In view of a large number of representation-theoretic constructions giving arithmetic progressions, I'd like to pose an explicit question about a more general case: Is there an algebraic structure which has irreducible representations precisely in dimensions $k^2, k\in\mathbb{N}$, s.t. its representation theory is expressible in the given language?
– Victor Protsak, Aug 2 '10 at 4:00


How sparse can the truth set be? Can you get the powers of two, for example?
– Mariano Suárez-Alvarez♦, Aug 2 '10 at 4:57

7 Answers

One can get arithmetic progressions as truth sets, as in Joel's comment. Pick non-negative integers $a$ and $b$, and a finite group $G$ which has at least one irreducible representation of degree $a$. Then there is a formula expressing the statement "the vector space is a $G$-module which is a sum of irreducible representations of degree $a$ and exactly $b$ trivial summands".

Later: For example, the irreps of $G=(\mathbb Z_3\times\mathbb Z_3)\rtimes\mathbb Z_3$ have degrees 1 and 3. It is generated by two elements which have cube equal to the identity, and which commute with their commutator. For example, if we want dimensions to be divisible by $3$, we can say:

(uppercase letters are matrices, lowercase letters are vectors, greek letters are scalars, and commutators are group commutators) A model for this is a $G$-module which does not have one-dimensional submodules. This works for other prime values of $3$.

Later: A vector space $V$ has the structure of an $M_n(k)$-module iff $n\mid\dim V$. This can also be written in the language, and it is much simpler than the first example!

I wonder if this isn't a more lowbrow way to achieve the same end; let $\omega$ be a primitive $n$th root of unity, and consider the statement: there are invertible matrices $A$ and $B$ such that $A^{-1}BA=\omega B$. This statement seems to be true if and only if the dimension is a multiple of $n$.
– Gerry Myerson, Aug 2 '10 at 3:24

@Gerry: you'd have to express the condition "$\omega$ is an $n$th root of unity" without writing $n$.
– Mariano Suárez-Alvarez♦, Aug 2 '10 at 3:26

@Mariano: but you would also have to produce a group with an irreducible representation of dimension $a$, for every $a$ you want to use. That is, for any $a$ you would have to use a different sentence, and Gerry can certainly write individual sentences expressing that $\omega$ is an $n$th root of unity for particular $n$.
– Ryan Reich, Aug 2 '10 at 3:37
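Gerry's clock-and-shift construction in the comments above is easy to check numerically. The following sketch (numpy; the helper name `clock_and_shift` is mine) builds witnesses $A, B$ with $A^{-1}BA = \omega B$ in a dimension divisible by $n$:

```python
import numpy as np

def clock_and_shift(n, dim):
    """Witnesses for dimension dim = k*n: block-diagonal copies of the
    n x n shift matrix S and clock matrix C, which satisfy
    S^{-1} C S = omega C for omega = exp(2*pi*i/n)."""
    omega = np.exp(2j * np.pi / n)
    C = np.diag(omega ** np.arange(n))     # clock: diag(1, w, ..., w^{n-1})
    S = np.roll(np.eye(n), 1, axis=0)      # shift: e_j -> e_{j+1 mod n}
    k = dim // n
    direct_sum = lambda M: np.kron(np.eye(k), M)
    return direct_sum(S), direct_sum(C)

n, dim = 3, 6
A, B = clock_and_shift(n, dim)
omega = np.exp(2j * np.pi / n)
# Check the defining relation A^{-1} B A = omega B in dimension 6.
assert np.allclose(np.linalg.inv(A) @ B @ A, omega * B)
```

Conversely, if $A^{-1}BA = \omega B$ with $B$ invertible and $\omega$ a primitive $n$th root of unity, the spectrum of $B$ is invariant under multiplication by $\omega$, so the eigenvalues fall into orbits of size $n$ and $n$ divides the dimension, in line with Gerry's claim.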

The irreducible, finite-dimensional complex representations of the Lie algebra $\mathfrak{sl}_2 \oplus \mathfrak{sl}_2$ are all of the form $V \otimes W$, where $V$ and $W$ are irreducible representations of $\mathfrak{sl}_2$; both $V$ and $W$ may have any dimension (and there is a unique representation of each dimension, not that it matters). If we require that neither copy of $\mathfrak{sl}_2$ act trivially, then $\dim(V \otimes W)$ is necessarily a composite integer. In particular, $n$ is prime if and only if $\mathbb{C}^n$ does not admit such a representation of this Lie algebra.

Note that $\mathfrak{sl}_2$ is spanned linearly by three elements with well-known Lie brackets, so a representation of $\mathfrak{sl}_2$ can be given by six matrices and fifteen commutator relations; specifying that one copy is nontrivial is a matter of specifying that one of the pairs of three does not consist of all zero matrices.

Later: Using representations of $\mathfrak{sl}_2$, we can refer to the dimension of a vector space: let $V$ have an irreducible representation of $\mathfrak{sl}_2$, as expressed by operators $e, f, h$ with the usual relations. Then if $ev = 0$ for a nonzero vector $v$, the weight $l$ as in $hv = lv$ uniquely determines the dimension.

Here is an elaboration on the ideas in Mariano's second post and Victor's comments under it and elsewhere, inspired by one of Peter Shor's comments to the question itself.

We can say that a vector space $V$ is a direct sum of (a specified number) $k$ subspaces if there are $k$ orthogonal idempotent matrices $P_i = P_i^2$ such that $\sum_{i = 1}^k P_i = I$. Moreover, using this construction we can speak of the subspaces themselves, as the images of the $P_i$.

We can say that $V$ is a tensor product of (a specified number) $k$ spaces $W_1, \dots, W_k$ by asking that it admit an irreducible representation of $\mathfrak{sl}_2^{\oplus k}$, say with generators $e_i, f_i, h_i$ in the usual notation. Suppose more generally that we have expressed $V = \bigotimes W_i \oplus V_0$, where $V_0$ has "reference" dimension $n$ as expressed above. Then we can say that the $W_i$ all have dimension equal to that of $V_0$ by testing highest weights in $V$. In summary: given $n$, we can express $n^k$ for any $k$ in terms of representation theory.

Let $f \in \mathbb{N}[x_1, \dots, x_r]$ be any polynomial, $f(x) = \sum a_{i_1, \dots, i_r} x_1^{i_1} \dots x_r^{i_r}$; we can say that $N = f(n_1, \dots, n_r)$ if $\mathbb{C}^N$ can be written as a direct sum of subspaces $W_{i_1, \dots, i_r}$, each of which is the direct sum of $a_{i_1, \dots, i_r}$ copies of the tensor product $(\mathbb{C}^{n_1})^{i_1} \otimes \dots \otimes (\mathbb{C}^{n_r})^{i_r}$.
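The dimension bookkeeping behind this is simply that direct sums add dimensions and tensor products multiply them. A small sketch (numpy; the helper `term_dim` and the sample polynomial are my own, for illustration):

```python
import numpy as np

def term_dim(coeff, dims, exps):
    """Dimension of coeff copies of (C^{n_1})^{(x)i_1} x ... x (C^{n_r})^{(x)i_r}."""
    d = coeff
    for n, i in zip(dims, exps):
        d *= n ** i
    return d

# Sample polynomial f(x, y) = 2*x^2*y + 3*y + 1 at (n1, n2) = (2, 3):
n1, n2 = 2, 3
N = (term_dim(2, (n1, n2), (2, 1))     # 2 copies of C^2 (x) C^2 (x) C^3
     + term_dim(3, (n1, n2), (0, 1))   # 3 copies of C^3
     + term_dim(1, (n1, n2), (0, 0)))  # 1 trivial summand
assert N == 2 * n1**2 * n2 + 3 * n2 + 1  # = 34

# Sanity check that tensor products really multiply dimensions:
V = np.kron(np.eye(n1), np.kron(np.eye(n1), np.eye(n2)))
assert V.shape == (n1**2 * n2, n1**2 * n2)
```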

Finally, if $f, g \in \mathbb{N}[x_0, x_1, \dots, x_r]$, we can say that $f(n, x_1, \dots, x_r) = g(n, x_1, \dots, x_r)$ is solvable in positive integers $x_1, \dots, x_r$ if we have a vector space $V$ expressible as an above such decomposition for both $f(n, \bullet)$ and $g(n, \bullet)$.

As an example, if we want to compute the diophantine set $S$ defined by $x_0 x_1 + x_2 - x_0^2 x_1$, we ask for a direct sum decomposition of $\mathbb{C}^N$ into subspaces $V_0, W$; of $W$ into a direct sum of $W_1, W_2, U$; and of $U$ into $V_0 \otimes W_1 \oplus W_2$ and $V_0^{\otimes 2} \otimes W_1$. Then $n = \dim V_0$ is in $S$.

Thus, for any diophantine set $S$, there is a formula in matrix algebra with one free variable, representing a projection matrix onto a subspace of dimension $n$, which has an interpretation in some $\mathbb{C}^N$ if and only if $n \in S$. This is not really the same as showing that $S$ is a "truth set", though.

I'm not sure of the etiquette here. Should I add this to my other answer? It is sort of unrelated.
– Ryan Reich, Aug 2 '10 at 4:28

Here is something implicit in the description of the language that I don't understand: are we allowed to write formulas like $(\exists k, a_1,\ldots,a_k): \ldots$? If not, how can irreducibility be expressed in the given language?
– Victor Protsak, Aug 2 '10 at 4:33

@Victor, a module is reducible if there exist two non-zero orthogonal, commuting, idempotent linear maps which add up to the identity and commute with everything.
– Mariano Suárez-Alvarez♦, Aug 2 '10 at 4:34

As per my comment: you can definitely decide whether $n$ is even or odd, since $A^2 = -I$ has a solution in an $n \times n$ real matrix $A$ if and only if $n$ is even.
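Concretely (a numpy sketch; the determinant remark is the standard argument, not part of the original post):

```python
import numpy as np

def J(n):
    """For even n, the block matrix [[0, -I], [I, 0]] squares to -I."""
    m = n // 2
    Z, I = np.zeros((m, m)), np.eye(m)
    return np.block([[Z, -I], [I, Z]])

A = J(4)
assert np.allclose(A @ A, -np.eye(4))

# For odd n there is no real solution: det(A)^2 = det(A^2) = det(-I)
# = (-1)^n = -1, which is impossible for a real matrix A.
```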

Here is how you can detect even complex dimension. If we have Hermitian conjugation, we can define a Hermitian matrix to be one $H$ such that $H^* = H$; any such matrix is diagonalizable (with real eigenvalues, not that it matters). One can say that $H$ has distinct eigenvalues: if $Hv = \lambda v$ and $Hw = \lambda w$, then $v = \mu w$ for some $\mu$. Then $n$ is even if and only if there is a Hermitian matrix $H$ with distinct eigenvalues and a matrix $A$ such that for every eigenvector $v$ of $H$ we have an eigenvector $w$, with different eigenvalue, such that $Av = w$ and $Aw = -v$. (This describes $A$ as having the matrix $\left(\begin{smallmatrix} 0 & -I \\ I & 0 \end{smallmatrix}\right)$, written in the eigenbasis of $H$.)

I don't follow your remarks about the basis. You can't quantify over sets of vectors, but just over vectors (and matrices and scalars).
– Joel David Hamkins, Aug 2 '10 at 3:59

I only need one basis, so I don't have to quantify over sets of vectors. That is, $e_1, \dots, e_n$ is a real basis if for all real scalars $a_1, \dots, a_n$, not all zero, we don't have $\sum a_j e_j = 0$, and if for all vectors $v$ we have real scalars $a_j, b_j$ such that $\sum a_j e_j + \sum i b_j e_j = v$. Then a matrix $A$ is "real" if each $A e_j$ is in the real span of the $e_j$, and $n$ is even iff there exists a real $A$ with $A^2 e_j = -e_j$ for each vector in the real basis.
– Ryan Reich, Aug 2 '10 at 4:23

But what I don't see is that your basis statement is expressible in the language of matrix algebra independently of the dimension. For any fixed n it seems fine, but what we need is one statement in that language, whose truth varies as n increases.
– Joel David Hamkins, Aug 2 '10 at 4:35

Fix an integer $n$. The dimension of a vector space $V$ is divisible by $n$ iff $V$ can be given the structure of a representation of the discrete Heisenberg group $H_n$ with central charge $1$. This is the Stone-von Neumann theorem. The multiplication table of $H_n$ is a finite-length statement in our language, which is true exactly when the dimension lies in $n \mathbb Z$ and false otherwise.

Over an arbitrary field, you can decide whether $n$ is even or odd by testing whether there exists a matrix $A$ such that $\pm 1$ are not its eigenvalues and $A$ is conjugate to $A^{-1}$ (yes for even, no for odd).
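For the even case a $2\times 2$ rotation block already provides a witness, since a reflection conjugates it to its inverse (a numpy sketch; the angle $0.7$ is an arbitrary choice avoiding multiples of $\pi$):

```python
import numpy as np

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
F = np.diag([1.0, -1.0])  # reflection across the x-axis

# The reflection conjugates the rotation to its inverse:
assert np.allclose(F @ R @ np.linalg.inv(F), np.linalg.inv(R))

# The eigenvalues e^{+-i*theta} avoid +-1:
ev = np.linalg.eigvals(R)
assert not np.any(np.isclose(ev, 1) | np.isclose(ev, -1))
```

In dimension $2k$ one takes block-diagonal copies of $R$; in odd dimension a real matrix conjugate to its inverse must have a real eigenvalue fixed by $\lambda \mapsto 1/\lambda$, i.e. $\pm 1$, which is excluded.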

One way to get the squares, as Victor asked in a comment, is the following: a simple module over $\mathfrak{sl}_2\oplus\mathfrak{sl}_2$ is of the form $V_n\otimes V_m$ (where $V_n$ is the $\mathfrak{sl}_2$-module of dimension $n+1$), and this has a submodule (for $\mathfrak{sl}_2$ acting diagonally) of dimension one exactly when $n=m$.

Damn, I missed it. Ironically, this is why I used the construction I did.
– Ryan Reich, Aug 2 '10 at 5:06

By increasing the number of factors, we can get $f(\mathbb{N})$ for any monic polynomial $f$ with integer coefficients as the truth set. I can almost see how to get any diophantine set ($\iff$ recursively enumerable, by Matiyasevich) in this way.
– Victor Protsak, Aug 2 '10 at 5:12

Mariano: for cubes, consider reps of $\mathfrak{sl}_2^{\oplus 3}$ whose restriction on the diagonal $\mathfrak{sl}_2$ in each pair of factors contains a trivial submodule (pairs 12 and 23 are sufficient). By replacing "trivial mod" with "$m$-dimensional simple mod", you can get off the pure powers. Basically, you can first get dim $n_1\ldots n_k$ from the direct sum of $k$ copies of $\mathfrak{sl}_2$ and then specialize $n_i$ or the difference $n_i-n_j,$ etc, to a chosen natural number. (That only produces pols that split over $\mathbb{Z}$, I'm not sure how to tweak it to get the rest.)
– Victor Protsak, Aug 2 '10 at 6:18

Here's an idea. I think it works, but it should be checked by people who understand representation theory better than I do. It's inspired by the reference given in Ito's answer to Ricky Demer's question, but since I don't really understand the reference, I can't tell whether it's the same construction or not.

We can express the statement (for fixed $k$): there are $k$ projection matrices $P_1$, $P_2$, $\ldots$, $P_k$ so that $\sum_j P_j = I$. Now, suppose we have a polynomial, say $Q(x,y,z) = \sum_{j=1}^k x^{\alpha_j} y^{\beta_j} z^{\gamma_j}$ (here, I'm using 3 variables only to reduce the number of subscripts).
I want to claim that we can find a statement whose truth set consists of the values $x^{\alpha_m} + y^{\beta_m} + z^{\gamma_m} + Q(x,y,z)$ for $x \neq y \neq z$, where $\alpha_m = \max_j \alpha_j$, etc. First, we say our space is the sum of $k+3$ subspaces by finding $k+3$ projection matrices as above. Now, we say the first space has dimension $x^{\alpha_m}$ by representing it as a module over $\mathfrak{sl}_2^{\alpha_m}$, with appropriate submodules acting diagonally, as suggested by Mariano and Victor. We should be able to write down similar equations which show that the $(j+3)$rd space has dimension $x^{\alpha_j} y^{\beta_j} z^{\gamma_j}$, for some $x$, $y$, $z$. Now, we want to require that the $x$ appearing in the $(j+3)$rd space is the same as the $x$ appearing in the first space. I want to do this by saying that there's a subspace of the $(j+3)$rd space which is also a module over $\mathfrak{sl}_2^{\alpha_j}$, and that there's an involution between the first subspace and this subspace of the $(j+3)$rd space which preserves this module structure.

I think this will work ... could people check it?

If it does work, it shows the question is undecidable, because we can use the same structure to get a diophantine equation ... keep the first three projection matrices the same, find new projection matrices for the rest of the space, and write down equations which give a different polynomial.

UPDATE 2:

I misunderstood Victor's question. I'll leave the comments I wrote anyway.

(1) I imposed the condition $x\neq y \neq z$ because I was worried that if $x=y$, you could somehow use a space of dimension $x^{\max(\alpha_j, \beta_j)}$ rather than $x^{\alpha_j + \beta_j}$. But I think I was being stupid.

(2) A term $k x^\alpha y^\beta$ can be obtained by adding $k$ terms, each one being $x^\alpha y^\beta$. Is this what you were asking?

(3) The first three spaces were for showing the problem is undecidable. We have two polynomials with coefficients in $Z^+$, $Q_1(x_1, \ldots, x_3)$ and $Q_2(x_1, \ldots, x_3)$, and we want to know if there is a solution to $Q_1 = Q_2$ in the positive integers. Now, we do the above construction twice, with completely new variables except for the first three projectors $P_1$, $P_2$, and $P_3$. We use these to make sure the $x$ we substitute in $Q_1$ are equal to the $x$ we substitute in $Q_2$. On further thought, we don't need these, either.

Yes, this basically works, but I don't understand the need for $x\ne y\ne z$ and the first 3 terms. Given any $k$ rings $R_1,\ldots,R_k$, you can form an idempotented ring $R=Re_1\oplus\ldots\oplus Re_k$. Then an $R$-mod is a direct sum of $R_i$-mods for different $i$. Now take $R_i$ to be a direct sum of several copies of $\mathfrak{sl}_2$ and by imposing conditions on restrictions to diag emb $\mathfrak{sl}_2$ as in my comment to Mariano's 2nd answer, you can get any polynomial in many variables with coeff in $Z_+$. But I think that for diophantine, you need more: $Z_+$-values of pols w/coeff in $Z$.
– Victor Protsak, Aug 2 '10 at 16:07

I have addressed this in an edit to my second answer.
– Ryan Reich, Aug 2 '10 at 16:22

Peter, here is what I was getting at: as you vary $f\in Z[x]$ over all polynomials with integer coefficients in any number of variables, their "image sets" $f(Z_+^n)\cap Z_+$ $\equiv$ diophantine subsets of $Z_+$ $\equiv$ recursively enumerable subsets of $Z_+$. I think that every image set is a truth value set. Your construction with idempotents shows how to get the image set of any sum of monomial terms (my embellishment of Mariano's construction showed how to get the terms themselves), i.e. how to realize the image set of any $f\in Z_+[x]$ as a truth set. Q: How to do it for $f\in Z[x]?$
– Victor Protsak, Aug 2 '10 at 20:33

I think I see what you're getting at. You're asking whether all diophantine subsets of $Z_+$ are realizable as truth value sets? I'm not sure they are. Let's think of the diophantine subsets as those outputs which can be generated by a Turing machine $T$ for some input. Now, consider the function $f(k) = $ smallest input for which $T$ outputs $k$. Suppose there is a recursively enumerable set for which $f(k)$ grows faster than any computable function. Then the machinery to output $k$ must be contained in a space of dimension $k$. But doesn't Tarski's theorem say this is impossible?
– Peter Shor, Aug 2 '10 at 21:09


Put more simply (I should learn to think before posting): truth value sets are recursive, since you can use Tarski's theorem to tell whether a number $k$ is in them. Diophantine sets need not be recursive, just recursively enumerable. However, even though we can't get all diophantine sets, the question of whether the truth value set for a statement is empty is still undecidable.
– Peter Shor, Aug 2 '10 at 21:22