Given a Dynkin diagram of a root system (or a Cartan Matrix), how do I know which combination of simple roots are roots?

E.g., consider the root system of $G_2$: let $a$ be the short simple root and $b$ the long one. It is clear that $a$, $b$, $b+a$, $b+2a$, $b+3a$ are positive roots, but it is not clear to me just from the Dynkin diagram that $2b+3a$ is a root.

Oops: if you want the game to actually not terminate, then instead of incrementing in (2), replace the label c with the sum of the surrounding labels minus c. Everything else is fine in either version.
– Allen Knutson, Feb 9 '10 at 15:03

I just played the game for $E_8$. Good times! BTW, if you do this (the first version you described) for affine $A_3$ or affine $E_8$ it looks to me like you get the fundamental imaginary root (and are then stuck). Does this happen in general? (If so, is it easy to see why?)
– GS, Apr 29 '10 at 15:48

For non-simply-laced root systems, what is the extra trick? Step (2) above checks whether the root is dominant and, if not, increases it in the obvious way (along a root string). In non-simply-laced root systems there are two dominant roots, however: one long, one short. So e.g. in $B_2$ (or $B_3$) I always get stuck at the sum of the simple roots (= the highest short root).
– fherzig, May 14 '11 at 10:42

Actually it seems better to play this game in reverse, at least in the non-simply-laced case, starting with the longest root (which I find easy to figure out by drawing the affine diagram and considering its labels).
– fherzig, May 14 '11 at 11:01

And even then one has to move further along the root string than just one step at a time: I just tried the case of $C_3$ (to go from "221" to "021").
– fherzig, May 14 '11 at 11:21

This question is answered in many textbooks on Lie algebras, Chevalley groups, representation theory, etc. I always think that Bourbaki's treatment of Lie groups and algebras is a great place to look (but I don't have it with me at the moment). I also tend to look things up in Humphreys, and in Knapp's "Lie Groups Beyond an Introduction".

Here's a method that's very good for pen-and-paper computations, and suffices for many examples. I believe that it will algorithmically answer your question in general as well.

The key is the "rank two case", and the key result is the following fact about root strings. I'll assume that we are working with the root system associated to a semisimple complex Lie algebra. Let $\Phi$ be the set of roots (zero is not a root, for me and most authors).

Definition:
If $\alpha, \beta$ are roots, the $\beta$-string through $\alpha$ is the set of elements of $\Phi \cup \{ 0 \}$ of the form $\alpha + n \beta$ for some integer $n$.

Example: Let $\alpha$ be the short root and $\beta$ be the long root, corresponding to the two vertices of the Dynkin diagram of type $G_2$. The $\alpha$-string through $\beta$ consists of:
$$\beta, \beta+\alpha, \beta + 2 \alpha, \beta + 3 \alpha,$$
as remarked in the question.

Perhaps you already knew this, since you found those roots "clear". But from here, you can continue again with $\alpha$ and $\beta$ root strings through these roots.

For example, let's consider the $\beta$-string through $\beta + 3 \alpha$. These are the roots of the form $\beta + 3 \alpha + n \beta$, for integers $p \leq n \leq q$. We know that $p = 0$, since for $n = -1$ we would get $3 \alpha$, which is not a root. The upper end of the string is then:
$$q = \frac{ -2 \langle \beta+3\alpha, \beta \rangle}{\langle \beta, \beta \rangle} = \frac{ -2(2 - 3)}{2} = \frac{2}{2} = 1.$$
From this we find that there is another root $2 \beta + 3 \alpha$ (corresponding to $n = q = 1$), and furthermore that $m \beta + 3 \alpha$ is not a root for $m \geq 3$.
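As a quick sanity check on this arithmetic, here is a sketch in exact rational arithmetic (the normalization is my own choice: the long root $\beta$ has $(\beta,\beta) = 2$, so in $G_2$ one gets $(\alpha,\alpha) = 2/3$ and $(\alpha,\beta) = -1$):

```python
# Verify the G_2 root-string computation above with exact arithmetic.
# Normalization (an assumption of this sketch): the long root beta has
# (beta, beta) = 2, so (alpha, alpha) = 2/3 and (alpha, beta) = -1.
from fractions import Fraction

bb = Fraction(2)       # (beta, beta)
ab = Fraction(-1)      # (alpha, beta)

# gamma = beta + 3*alpha, and p = 0 since 3*alpha is not a root, so the
# upper end of the beta-string through gamma is q = -2(gamma, beta)/(beta, beta).
gamma_beta = bb + 3 * ab            # (beta + 3*alpha, beta) = 2 - 3 = -1
q = -2 * gamma_beta / bb
print(q)  # 1: so 2*beta + 3*alpha is a root, and the string stops there
```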

By using root strings, together with bounds on how long roots can be, one can find all of the roots without taking too much time. It should also be mentioned that, for a simple root system, there is a unique "highest root", in which the simple roots occur with maximal multiplicity. The multiplicities of simple roots in the highest root can be looked up in any decent table, and computed quickly by hand for type A-D-E (using a trick from the McKay correspondence). This is useful for a bound, so you don't mess around with root strings for longer than necessary.

Looking at Fulton and Harris like Mariano suggests is a good idea, but here's another answer which might be helpful to think about.

In the simply-laced case, put an orientation on the edges. Then the positive roots are in bijection with the indecomposable representations (over the complex numbers) of the corresponding quiver. More precisely, once we pick a labeling of the nodes by simple roots, the dimension vector of an indecomposable representation gives the coefficients of a positive root as a linear combination of simple roots. So for example, in type A, a positive linear combination of simple roots can be a root only if its support is connected (otherwise the corresponding representation splits as a direct sum of the two pieces). Playing around with this a bit also tells you that the coefficients all have to be 1.
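To illustrate the type A statement concretely, here is a small sketch (the function name is mine) listing the positive roots of $A_n$ as connected intervals of simple roots with all coefficients equal to 1:

```python
# In type A_n, a positive root is alpha_i + alpha_{i+1} + ... + alpha_j for
# some interval [i, j]: connected support, all coefficients 1, matching the
# indecomposable (interval) representations of an A_n quiver.
def type_a_positive_roots(n):
    roots = []
    for i in range(n):
        for j in range(i, n):
            roots.append(tuple(1 if i <= k <= j else 0 for k in range(n)))
    return roots

print(type_a_positive_roots(3))
# 6 roots: (1,0,0), (1,1,0), (1,1,1), (0,1,0), (0,1,1), (0,0,1)
```

Note that a vector with disconnected support, such as $(1,0,1)$, never appears, matching the direct-sum observation above.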

For the non simply-laced case, we can reduce to the simply laced case via folding. So for example, $G_2$ is the folding of $D_4$ by the order 3 automorphism $\sigma$ which spins it around. The right replacement (for our purposes) for representations of a "$G_2$ quiver" are representations of $D_4$ which are invariant under $\sigma$. And then again positive roots correspond to the dimensions of the indecomposable $\sigma$-invariant representations (indecomposable considered as an object in the category of $\sigma$-invariant representations). See Hubery's paper http://www.ams.org/mathscinet-getitem?mr=2025328 for details.

So for example, the short root corresponds to the middle node of $D_4$, while the long root corresponds to the orbit of the 3 outer nodes. Orient the edges of $D_4$ inward, and put ${\bf C}^3$ in the middle node with basis $e_1, e_2, e_3$. Make the outer nodes two-dimensional, and let their images be the subspaces $\langle e_1, e_2 \rangle$, $\langle e_2, e_3 \rangle$, and $\langle e_3, e_1 \rangle$ respectively. Then this representation is $\sigma$-invariant and has no $\sigma$-invariant summands (though it is decomposable as a $D_4$-representation). This corresponds to the root $3a+2b$.

I remember writing for fun a Fortran program (later translated to C) to compute the roots given a Cartan matrix. The algorithm has the virtue that if you input the Cartan matrix of an affine Kac-Moody algebra, say, it will still compute the roots level by level, except that, of course, it will never stop running :)
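I no longer have the code, but the level-by-level algorithm is easy to reconstruct. Here is a sketch in Python (my own reconstruction, not the original program), using the root-string fact from the answer above: writing $p$ for the number of times one can subtract $\alpha_i$ from $\beta$ and stay a root, $\beta + \alpha_i$ is a root iff $p - \langle \beta, \alpha_i^\vee \rangle > 0$.

```python
def positive_roots(cartan):
    """Positive roots, as coefficient tuples in the simple-root basis, built
    level by level. Convention assumed here:
    cartan[i][j] = <alpha_i, alpha_j^vee> = 2(alpha_i, alpha_j)/(alpha_j, alpha_j)."""
    n = len(cartan)
    simple = [tuple(int(k == i) for k in range(n)) for i in range(n)]
    roots = set(simple)
    level = list(simple)
    while level:                      # on an affine Cartan matrix this loops forever
        next_level = []
        for beta in level:
            for i in range(n):
                # pairing <beta, alpha_i^vee>
                pairing = sum(beta[j] * cartan[j][i] for j in range(n))
                # p = how far the alpha_i-string extends below beta
                p, down = 0, beta
                while True:
                    down = tuple(c - int(k == i) for k, c in enumerate(down))
                    if down not in roots:
                        break
                    p += 1
                if p - pairing > 0:   # then beta + alpha_i is also a root
                    up = tuple(c + int(k == i) for k, c in enumerate(beta))
                    if up not in roots:
                        roots.add(up)
                        next_level.append(up)
        level = next_level
    return sorted(roots)

# G_2, with index 0 the short root a and index 1 the long root b:
g2 = [[2, -1], [-3, 2]]
print(positive_roots(g2))  # six roots, the highest being (3, 2) = 3a + 2b
```

Feeding it an affine Cartan matrix makes it generate roots level by level forever, exactly as described.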

If you ever find yourself wanting to rule out some vector being a root in a hurry, you can check it against the roots you do know: for any two roots $x, y$, the pairing $2(x,y)/(y,y)$ must be a Cartan integer, so a value that is not an integer between $-3$ and $3$ (for $x \neq \pm y$) rules the vector out.
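As a sketch of that check (the helper name and the $G_2$ normalization below are my own choices), one can rule out $a + 2b$ in $G_2$ this way:

```python
# For two roots x, y (in the simple-root basis), 2(x,y)/(y,y) must be a
# Cartan integer; a failure rules the candidate vector out.
from fractions import Fraction

def cartan_pairing(x, y, gram):
    """2(x, y)/(y, y), computed exactly from the Gram matrix of the simple roots."""
    def ip(u, v):
        n = len(u)
        return sum(Fraction(u[i]) * gram[i][j] * v[j] for i in range(n) for j in range(n))
    return 2 * ip(x, y) / ip(y, y)

# G_2 Gram matrix, with a short, b long, normalized so (b, b) = 2:
gram = [[Fraction(2, 3), Fraction(-1)], [Fraction(-1), Fraction(2)]]

c1 = cartan_pairing((2, 1), (1, 0), gram)  # 2a+b against a: gives 1, consistent
c2 = cartan_pairing((1, 2), (1, 0), gram)  # candidate a+2b against a
print(c1, c2)  # 1 -4: |c2| > 3, so a + 2b cannot be a root
```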