On one of my exams last year, we chose five or six out of eight problems; one of them asked us to prove the Bruhat decomposition for $GL_n(k)$. I was one of the two people who chose that problem. I gave a very long, convoluted argument which, although correct, was really inelegant. I proved it more than once because I wasn't satisfied with my proof, and I eventually found a somewhat slick contradiction argument based on maximizing the number of leading zeroes of rows (the number of zeroes before the pivot), but the proof was still a real mess.

Statement of the problem:

Let $G:=GL(V)$ for $V$ a finite-dimensional $k$-vector space. Let $B$ be the stabilizer of the standard flag (with respect to the standard basis, these are the invertible upper-triangular matrices), and let $W$ be the subgroup of permutation matrices.

Show that $G=\coprod_{w\in W} BwB$, where the $BwB$ are double cosets. That is, show that $G=BWB$.

Question:
Is there a slick proof of this fact, maybe using "more machinery"? In particular, is there any sort of "coordinate-free" proof? (Can we even define the Borel and Weyl subgroups without coordinates?)

You can check out section 23.4 of Fulton and Harris for a direct proof for $SL(V)$.
– Faisal, Feb 17 '10 at 3:15


This is very much not coordinate-independent, but you should know that the Bruhat decomposition for $GL_n(k)$ is just "row reduction" from the first week of linear algebra. The rightmost $B$ is the "echelon form" of a matrix, the factor of $w$ accounts for rows having to be reordered, and the leftmost $B$ is the coefficient matrix of the Gauss-Jordan elimination algorithm.
– Ryan Reich, Jun 21 '10 at 19:46
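Ryan Reich's row-reduction recipe can be carried out explicitly. Below is an illustrative sketch (not from the thread; the function names and the use of exact rational arithmetic are my own choices) that factors an invertible matrix as $b_1 w b_2$: pivots are chosen as low as possible, clearing entries above a pivot is left multiplication by an upper-triangular matrix, and clearing entries to the right of a pivot is right multiplication by one.

```python
from fractions import Fraction

def bruhat_factor(A):
    """Factor an invertible matrix A as A = b1 @ P @ b2, with b1, b2 upper
    triangular and P a permutation matrix, by Gaussian elimination.
    Illustrative sketch; assumes A is square and invertible."""
    n = len(A)
    M = [[Fraction(x) for x in row] for row in A]
    # Maintain the invariant A = b1 @ M @ b2 throughout.
    b1 = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    b2 = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    used = [False] * n
    pivot_row = [0] * n  # pivot_row[j] = row of the pivot in column j
    for j in range(n):
        # Choose the lowest still-unused row with a nonzero entry in
        # column j: this choice is what determines the permutation.
        i = max(r for r in range(n) if not used[r] and M[r][j] != 0)
        used[i], pivot_row[j] = True, i
        for r in range(i):            # clear above the pivot: left mult.
            c = M[r][j] / M[i][j]     # by upper triangular, folded into b1
            if c:
                for k in range(n):
                    M[r][k] -= c * M[i][k]
                for k in range(n):
                    b1[k][i] += c * b1[k][r]
        for j2 in range(j + 1, n):    # clear right of the pivot: right mult.
            c = M[i][j2] / M[i][j]    # by upper triangular, folded into b2
            if c:
                for k in range(n):
                    M[k][j2] -= c * M[k][j]
                for k in range(n):
                    b2[j][k] += c * b2[j2][k]
    # M is now monomial, M = P @ D; absorb the diagonal D into b2.
    P = [[Fraction(int(pivot_row[j] == i)) for j in range(n)]
         for i in range(n)]
    for j in range(n):
        d = M[pivot_row[j]][j]
        for k in range(n):
            b2[j][k] *= d
    return b1, P, b2

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]
```

The choice of the lowest available pivot row is what pins down the permutation $w$; the two triangular factors simply record the row and column operations.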


A warning: although in the case of the reductive algebraic group $G=GL_n,$ the Weyl group $W=N(T)/T\simeq S_n$ can be realized as a subgroup of $G$, for the simple algebraic group $SL_n,$ $W\simeq S_n$ cannot be realized as a subgroup of $G.$ So when we talk about Bruhat decomposition in semisimple algebraic groups, elements $w$ by themselves don't make sense, but the cells $BwB$ do (check that they are independent of the choice of coset representatives!).
– Victor Protsak, Jun 22 '10 at 5:13


I don't see any reason to beat up on dependable ol' row-reduction. I seem to remember a quote by a famous algebraist: "I think coordinate-free, I write coordinate-free, but when the chips are down I lock the door and compute like hell with matrices."
– GS, Jun 22 '10 at 9:36


"We share a philosophy about linear algebra: we think basis-free, we write basis-free, but when the chips are down we close the office door and compute with matrices like fury." - Irving Kaplansky about himself and Paul Halmos.
– Harry Gindi, Jun 22 '10 at 17:09

6 Answers

You are asking to compute the double quotient $B\backslash G /B.$ This is the same as computing $G\backslash (G/B \times G/B)$. A point in $G/B$ is a full flag on $k^n$. So you
are trying to compute the set of pairs $(F_1,F_2)$ of flags, modulo the simultaneous action
of $G$.

Another way to think about $G/B$ is that it is the space of Borel subgroups (the coset of
$g$ corresponds to the conjugate $g B g^{-1}$, where $B$ is the upper triangular Borel
that was fixed in the statement of the question).
The passage from flags to Borels is given by mapping $F$ to its stabilizer in $G$.

So you can also think that you're trying to describe pairs of Borels $(B_1,B_2)$,
modulo simultaneous conjugation by $G$.

Now recall that a torus $T$ in $G$ is a conjugate of the diagonal subgroup. Choosing
a torus in $G$ is the same as choosing a decomposition of $k^n$ as a direct sum of
1-dimensional subspaces (or lines, for short). (These will be the various eigenspaces of the torus acting
on $k^n$.) The diagonal torus corresponds to the standard decomposition of $k^n$
as $n$ copies of $k$.

Now a torus $T$ is contained in a Borel $B$ (let me temporarily use $B$ to
denote any Borel, not just the upper triangular one) if and only if the corresponding
decomposition of $k^n$ into a sum of lines is compatible with the flag that $B$ fixes, i.e. if the flag is given by taking first one line, then the sum of that one with a second, then the sum of those two
with a third, and so on. In particular, choosing a torus $T$ contained in a Borel $B$
determines a "labelled decomposition" of $k^n$, i.e. we may write
$k^n = L_1 \oplus \ldots \oplus L_n$, where $L_i$ is the $i$th line;
just to be clear, the labelling is chosen so that the corresponding flag is just
$L_1 \subset L_1\oplus L_2 \subset \cdots.$ (Again, to be completely clear,
if $T$ is the conjugate by $g \in G$ of the diagonal torus, then $L_i$ is the
translate by $g$ of the line spanned by the $i$th standard basis vector.)

Note that this labelled decomposition depends not just on $T$ (which only gives an
unlabelled decomposition) but on the Borel $B$ containing $T$ as well. (In more Lie
theoretic language, this is a reflection of the fact that a torus determines
a collection of weights in any representation of $G$,
while a choice of a Borel containing the torus lets you order the weights as well,
by determining a set of positive roots.)

Of course, $B$ will contain more than one torus; or more geometrically,
$k^n$ will admit more than one decomposition into lines adapted to the filtration
$F$ of which $B$ is the stabilizer. But if one thinks about the different possible
lines, one sees that $L_1$ is uniquely determined (it must be the first step in
the flag), $L_2$ is uniquely determined modulo $L_1$ (since together with $L_1$
it spans the second step in the flag), and so on, which shows that any two tori
$T$ in $B$ are necessarily conjugate by an element of $B$, and the same sort
of reasoning shows that the normalizer of $T$ in $B$ is just $T$ (because
if $g \in G$ is going to preserve both the flag and the collection of lines, which
is the same as preserving the ordered collection of lines, all it can do
is act by a scalar on each line, which is to say, it must be an element of $T$).

Now a key fact is that any two Borels, $B_1$ and $B_2$, contain a common torus.
In other words, given two filtrations, we can always choose an (unordered)
decomposition of $k^n$ into a direct sum of lines which is adapted to both
filtrations. (This is an easy exercise.) Of course the ordering of the
lines will depend which of the two filtrations we use. In other words, we get a set
of $n$ lines in $k^n$ which are ordered one way according to the filtration
$F_1$ given by $B_1$, and in a second way according to the filtration
$F_2$ given by $B_2$. If we let $w \in S_n$ be the permutation which takes the first
ordering to the second, then we see that the pair $B_1$ and $B_2$ determines
an element $w \in S_n$. This is the Bruhat decomposition.

It wouldn't be hard to continue with this point of view to completely prove
the claimed decomposition, but it will be easier for me (at least notationally)
to switch back to the $B\backslash G/B$ picture.

Concretely, let $D$ denote the diagonal torus, and suppose we are given $g \in G$. The Borels $B$ and $gBg^{-1}$ contain a common torus $T$; since all tori in a given Borel are conjugate by an element of that Borel, we may write $T = bDb^{-1}$ with $b \in B$, and also $T = (gb')D(gb')^{-1}$ with $b' \in B$ (applying the same fact to $g^{-1}Tg \subseteq B$). We thus find that $b^{-1} g b'$ normalizes $D$, and thus that $g \in B w B$ for
$w$ the image of $b^{-1}gb'$ in the Weyl group $N(D)/D$. Note that since $b$ and $b'$ are
well defined modulo $D$ (we saw above that the normalizer of a torus in a Borel is the torus itself), the map from $T$ to $w$ is well-defined.

Thus certainly $G$ is the union of the $B w B$. If you consider what I've already
written carefully, you will also see that the different double cosets are disjoint. We can also prove this directly as follows: given $B$ and $g B g^{-1}$, the map
$T \mapsto w$ constructed above is a map from the set of $T$ contained in $B \cap
g B g^{-1}$ to the set $N(D)/D$. Now any two such $T$ are in fact conjugate
by an element of $B \cap g B g^{-1}$. The latter group is connected,
and hence the space of such $T$ is connected.
(These assertions are perhaps most easily seen by thinking in terms of
filtrations and decompositions of $k^n$ into sums
of lines, as above). Since $N(D)/D$ is discrete, we see that $T \mapsto w$ must
in fact be constant, and so $w$ is uniquely determined just by $g B g^{-1}$ alone.
In other words, the various double cosets $B w B$ are disjoint.

The preceding discussion is a little long, since I've tried to explain (in the
particular special cases under consideration) some general facts about conjugacy
of maximal tori in algebraic groups, using the translation of group theoretic
facts about $G$, $B$, etc., into linear algebraic statements about $k^n$.
Nevertheless, I believe that this is the standard proof of the Bruhat decomposition,
and explains why it is true: the relative position of two flags is described by an element of the Weyl group.

While I basically agree with Kevin Buzzard that this is something to find in a textbook rather than on mathoverflow, I'll take the opportunity to give a totally nonstandard description, inspired by Shizuo Zhang's comment.

Given an action of the circle group $S = {\mathbb G}_m$ on a smooth variety $X$, with isolated fixed points $X^S$, we can define a Bialynicki-Birula decomposition
$$X = \coprod_{f\in X^S} X_f, \qquad X_f := \{ x \in X : \lim_{z\to 0} S(z)\cdot x = f \}.$$
Part of B-B's theorem is that each $X_f$ is a copy of affine space.

If $Y \subseteq X$ is $S$-invariant, then $Y$ acquires a similar decomposition, and $Y_f = X_f \cap Y$ for each $f\in Y^S \subseteq X^S$ (very easy to prove).

Consider the embedding $Y := GL_n/B = Flags(n) \to \prod_{k=1}^n Gr(k,n) \to \prod_{k=1}^n {\mathbb P}(Alt^k\ {\mathbb C}^n) =: X$, where the second map is made of Plücker embeddings, and take $S$ acting on ${\mathbb C}^n$ by $z\mapsto diag(z,z^2,z^3,\ldots,z^n)$, AKA the $\check\rho$ coweight. Then its fixed points on each ${\mathbb P}(Alt^k\ {\mathbb C}^n)$ are indexed by $k$-element subsets of $\{1,\ldots,n\}$. So $X^S$ is lists of subsets, and $Y^S$ is increasing lists of subsets, or equivalently permutations.

Ergo, there exists a decomposition of $GL_n/B$ into affine spaces, indexed by permutations. (It's not obvious from this description that they are the $B$-orbits, but maybe that's okay, since more spaces have these $S$-actions than have finitely many $B$-orbits.)
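The last identification can be checked by brute force. Here is a small sketch (my own illustration, not part of the answer): enumerate the increasing chains of subsets $S_1 \subset \cdots \subset S_n$ of $\{1,\ldots,n\}$ with $|S_k| = k$; recording the element added at each step identifies such a chain with a permutation.

```python
def fixed_point_chains(n):
    """Enumerate increasing chains of subsets S_1 ⊂ ... ⊂ S_n of {1,...,n}
    with |S_k| = k; these index the S-fixed points of the flag variety.
    The tuple of elements added, one per step, identifies a chain with a
    permutation of {1,...,n}."""
    chains = [((), frozenset())]  # (elements added so far, current subset)
    for k in range(1, n + 1):
        chains = [(added + (x,), s | {x})
                  for added, s in chains
                  for x in range(1, n + 1) if x not in s]
    return [added for added, _ in chains]
```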

Here is some standard machinery generalizing this result, which is more combinatorial than the standard reductive-groups proofs (but gives far less understanding than e.g. Matt Emerton's explanation above).

Assume $G$ acts strongly transitively on a thick building $\Delta$. Let $B$ be the stabilizer of the fundamental chamber, $N$ the stabilizer of the fundamental apartment, $T$ the subgroup fixing the fundamental apartment pointwise, and $W$ the quotient $N/T$. Then $(G,B,N)$ is a $BN$-pair, also called a Tits system, for $G$. In particular, you have the Bruhat decomposition $G=\coprod_{w\in W}BwB$, plus lots more.

In this case, take $G$ to be $GL(V)$, and take $\Delta$ to be the flag complex of subspaces of $V$. The stabilizer $B$ of the fundamental chamber is the invertible upper-triangular matrices. The stabilizer $N$ of the fundamental apartment is the monomial matrices; the subgroup $T$ fixing the fundamental apartment pointwise is the invertible diagonal matrices; and the Weyl group $W$ is the quotient $N/T$, which can be identified with the permutation matrices.
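To make the last identification concrete, here is a tiny sketch (my own illustration; not from Brown or Alperin-Bell) splitting a monomial matrix into its permutation and diagonal parts, which is the isomorphism $N/T \cong W$ realized on matrices.

```python
def split_monomial(M):
    """Write a monomial matrix (exactly one nonzero entry in each row and
    each column) as M = P @ D with P a permutation matrix and D diagonal.
    The coset M·T remembers exactly P, which is the point of N/T ≅ W."""
    n = len(M)
    # Column of the unique nonzero entry in each row.
    col_of_row = [next(j for j in range(n) if M[i][j] != 0) for i in range(n)]
    assert sorted(col_of_row) == list(range(n)), "matrix is not monomial"
    P = [[int(col_of_row[i] == j) for j in range(n)] for i in range(n)]
    D = [[0] * n for _ in range(n)]
    for i in range(n):
        D[col_of_row[i]][col_of_row[i]] = M[i][col_of_row[i]]
    return P, D
```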

You can find all the above, including proofs, in Chapter V.2 of "Buildings" by Brown. For the special case of $GL(V)$, you could also look at Exercises 7 and 8 in Chapter 2.4 of "Groups and Representations" by Alperin-Bell.

My answer starts off just like Emerton's answer above; you want the $G$-orbits on $G/B\times G/B$. But now, I diverge from Emerton to say that $G/B$ is the space of full flags $F_0\subset F_1\subset \dotsb \subset F_n$, where $F_i$ is of dimension $i$. Our problem is to determine the orbits of pairs of flags under simultaneous translation by $G$.

Suppose $F_\bullet$ and $F'_\bullet$ are two such flags. One can first show that the dimensions $\dim(F_i\cap F'_j)$ completely determine the orbit of the pair: if $E_\bullet$ and $E'_\bullet$ are another pair of flags such that $E_i\cap E'_j$ has the same dimension as $F_i\cap F'_j$ for all $i$ and $j$, then it is possible to construct an element $g\in GL_n$ which takes $F$ to $E$ and $F'$ to $E'$ (for example, by choosing suitable bases).

Write $d_{ij}$ for $\dim E_i\cap E'_j$. Let $w_{ij}=d_{ij}-d_{i-1,j}-d_{i,j-1}+d_{i-1,j-1}$. One may show that the matrix $(w_{ij})$ is a permutation matrix and that the $d_{ij}$ can be recovered from the $w_{ij}$ (in fact, $w_{ij}$ keeps track of when the jump from $0$ to $1$ happens in the filtration of $E_i/E_{i-1}$ induced by $E'_\bullet$). Also, every permutation matrix arises in this way, for example from the pair $(E_\bullet, w\cdot E_\bullet)$, where we now think of the permutation matrix as an element of $GL_n$.
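This recipe is directly computable. The following sketch (my own illustration; exact rational arithmetic, helper names invented) computes $d_{ij} = \dim(E_i \cap E'_j)$ via $\dim(E_i \cap E'_j) = i + j - \dim(E_i + E'_j)$ and then forms the matrix $(w_{ij})$.

```python
from fractions import Fraction

def rank(rows):
    """Row rank over the rationals, by Gaussian elimination."""
    if not rows:
        return 0
    M = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def relative_position(E, F):
    """Given two complete flags in k^n, each presented as an ordered basis
    (the i-th flag step is the span of the first i vectors, given as
    coordinate rows), return the permutation matrix
    w_ij = d_ij - d_{i-1,j} - d_{i,j-1} + d_{i-1,j-1},
    where d_ij = dim(E_i ∩ F_j) = i + j - dim(E_i + F_j)."""
    n = len(E)
    d = [[i + j - rank(E[:i] + F[:j]) for j in range(n + 1)]
         for i in range(n + 1)]
    return [[d[i][j] - d[i - 1][j] - d[i][j - 1] + d[i - 1][j - 1]
             for j in range(1, n + 1)] for i in range(1, n + 1)]
```

A pair of identical flags sits in the cell of the identity permutation, and a flag paired with a permuted copy of itself recovers that permutation.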

One can prove the Bruhat decomposition by applying the Jordan-Hölder theorem.
This theorem shows that two chains of submodules of a module of finite length
whose successive quotients
are simple have the same length, and that the same quotients appear.
But it is slightly more precise, because it gives a precise recipe
for a bijection between the two lists. Here we apply it to modules of finite length over a field,
i.e., finite-dimensional vector spaces.

Let $E=(e_1,\dots,e_n)$ and $F=(f_1,\dots,f_n)$ be two bases of a vector space $M$
over a field $K$.

For $0\leq i\leq n$, define
$M_i=\langle e_1,\dots,e_i\rangle $ and $N_i=\langle f_1,\dots,f_i\rangle$.
The proof of the theorem of Jordan-Hölder furnishes
a (unique) permutation $\sigma$ of $\{1,\dots,n\}$
such that $M_{i-1}+M_i\cap N_{\sigma(i)-1}=M_{i-1}$
and $M_{i-1}+M_i\cap N_{\sigma(i)}=M_i$,
for every $i\in\{1,\dots,n\}$.

For any $i$, let $x_i$
be a vector belonging to $M_{i}\cap N_{\sigma(i)}$
but not to $M_{i-1}$.
For every $i$, one has $\langle x_1,\dots,x_i\rangle =M_i$;
it follows that $X=(x_1,\dots,x_n)$ is a basis of $M$;
moreover, there exists a matrix $B_1$, in upper triangular form,
such that $X=E B_1$.

Set $\tau=\sigma^{-1}$.
Similarly, one has $\langle x_{\tau(1)},\dots,x_{\tau(i)}\rangle=N_i$
for every $i$. Consequently, there exists a matrix $B_2$,
still in upper-triangular form, such that
$(x_{\tau(1)},\dots,x_{\tau(n)})=F B_2$.

Let $P_\tau$ be the permutation matrix associated to $\tau$;
we have $(x_{\tau(1)},\dots,x_{\tau(n)})=(x_1,\dots,x_n)P_\tau$.
This implies that $FB_2=EB_1P_\tau$, hence
$F=E B_1 P_{\tau} B_2^{-1} $.
Therefore, the matrix $A=B_1P_{\tau}B_2^{-1}$
that expresses the coordinates of the vectors of $F$
in the basis $E$
is the product of an upper-triangular matrix, a permutation matrix
and another upper-triangular matrix.

In the group $\mathop{\rm GL}(n,K)$, let $B$ be the subgroup consisting
of upper-triangular matrices, and let $W$ be the subgroup
consisting of permutation matrices.
We have proved that $\mathop{\rm GL}(n,K)=BWB$: this is precisely the Bruhat decomposition.
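The permutation $\sigma$ from this recipe can be computed directly from the two bases: $\sigma(i)$ is the least $j$ such that $M_i \subseteq M_{i-1} + N_j$ (equivalently, $M_{i-1} + M_i \cap N_j = M_i$). Here is a sketch (my own illustration; bases given as lists of coordinate rows, names invented, exact rational arithmetic):

```python
from fractions import Fraction

def rank(rows):
    """Row rank over the rationals, by Gaussian elimination."""
    if not rows:
        return 0
    M = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def jordan_holder_permutation(E, F):
    """Given two bases E = (e_1,...,e_n) and F = (f_1,...,f_n) of K^n,
    each as a list of coordinate rows, return [sigma(1),...,sigma(n)],
    where sigma(i) is the least j with M_i ⊆ M_{i-1} + N_j,
    M_i = <e_1,...,e_i> and N_j = <f_1,...,f_j>."""
    n = len(E)
    sigma = []
    for i in range(1, n + 1):
        # e_i lies in M_{i-1} + N_j  iff  appending e_i does not raise
        # the rank of the spanning set of M_{i-1} + N_j.
        j = next(j for j in range(1, n + 1)
                 if rank(E[:i - 1] + F[:j] + [E[i - 1]])
                 == rank(E[:i - 1] + F[:j]))
        sigma.append(j)
    return sigma
```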

As mentioned in Kevin's comments, the standard proof for reductive $G$ is in any of the standard texts on reductive groups. One slick way to go about it is to show that you have a Tits system/$BN$-pair, after which the Bruhat decomposition is known to fall out. If $G=GL_n$, then it is not hard to show that this is the case, while for more general groups one still needs some development of the structure theory of reductive groups.