and in Section 7 the following technical lemma is given (Lemma 7.5 in the paper).

Lemma (paraphrased). Let $S$ be the set of non-empty subsets of some fixed finite set $F$, and consider the matrix $A:S\times S\to {\mathbb Q}$ where
$$ A_{I,J} = \begin{cases} 1 & \text{if } I\cap J\neq \emptyset, \\ 0 & \text{if } I\cap J = \emptyset. \end{cases} $$
Then $A$ is invertible.
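For concreteness, the lemma is easy to check numerically for small $F$. Here is a minimal Python sketch (the helper names are mine, not from the paper); it builds $A$ for $|F|\le 4$ and computes its determinant exactly over the rationals, consistent with the fact, proved in the answers, that $A$ is invertible over $\mathbb Z$:

```python
from itertools import combinations
from fractions import Fraction

def subsets(n):
    # all non-empty subsets of {0, ..., n-1}, as frozensets
    return [frozenset(c) for k in range(1, n + 1)
            for c in combinations(range(n), k)]

def intersection_matrix(n):
    # A_{I,J} = 1 if I and J intersect, else 0
    S = subsets(n)
    return [[1 if I & J else 0 for J in S] for I in S]

def det(M):
    # determinant by Gaussian elimination over the rationals
    M = [[Fraction(x) for x in row] for row in M]
    d, m = Fraction(1), len(M)
    for c in range(m):
        p = next((r for r in range(c, m) if M[r][c]), None)
        if p is None:
            return Fraction(0)
        if p != c:
            M[c], M[p] = M[p], M[c]
            d = -d
        d *= M[c][c]
        for r in range(c + 1, m):
            f = M[r][c] / M[c][c]
            for k in range(c, m):
                M[r][k] -= f * M[c][k]
    return d

for n in range(1, 5):
    print(n, det(intersection_matrix(n)))
```

In each case the determinant comes out as $\pm 1$, so $A$ is invertible over $\mathbb Z$, not just over $\mathbb Q$.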

Selim gives a proof by induction that the columns of $A$ are linearly independent, but he says "we could not find a particularly enlightening proof". So my question is this: do we have a more conceptual argument to show this (real, symmetric) matrix is invertible?

[EDIT/UPDATE 2012-03-07: this was poorly phrased on my part; I was hoping to find some explanation that involved the lattice or group structure on $\{0,1\}$, and which took advantage of the very particular structure of this matrix, although I am grateful for all answers received so far. In some sense I wanted to know: "what is the pattern?" or "what is the underlying algebraic mechanism?" -- the matrix is defined in terms of some incidence or order structure, so does that give some way to interpret invertibility of this matrix as part of a more general result? (I do not mean a result like "a matrix with non-zero determinant is invertible".)

Benjamin Steinberg's answer comes closest, at present, to what I was hoping for, but Benjamin Young's answer is also very suggestive and helpful.

I suspect this will be routine for several MO regulars, but hope it is not too elementary or "too localized".]

[older comments/thoughts, left here for context]

My vague thoughts are that one could view $A$ as the corner of a square matrix indexed by the power set of $F$, and then perhaps do some kind of Fourier transform on the group $\{0,1\}^{|F|}$. Or perhaps there is some kind of Möbius inversion at work here?

While I'm here, a question on terminology: the matrix $A$ is of course the adjacency matrix of a certain graph whose vertex set is $S$. Does this graph have an established name?

In terms of the name, an "intersection graph" is a more general term for a graph formed on a collection of subsets by connecting two subsets if they have non-empty intersection. I'm not sure if there's a special name for the graph in the case where $S$ is all the non-empty subsets of $F$.
– Kevin P. Costello, Mar 6 '13 at 0:35


This is proved in the paper Linear Representations of Semigroups of Boolean Matrices by Ki Hang Kim and Fred W. Roush (jstor.org/stable/2041789), where they prove invertibility over $\mathbb Z$ via an order argument that I believe can be turned into a Möbius inversion argument. In fact, I know Mohan Putcha once emailed me a proof using Möbius inversion, which I will try to find when I get a chance to get into my old Carleton email. The theorem actually proves that a certain Munn algebra is isomorphic to a matrix algebra, but the matrix you write above is the sandwich matrix of the J-class
– Benjamin Steinberg, Mar 6 '13 at 1:02

ctd... and since you know semigroup theory you can easily make the translation.
– Benjamin Steinberg, Mar 6 '13 at 1:03

I have ended up accepting Benjamin Steinberg's answer, but if I could have chosen more than one, I would have.
– Yemon Choi, Mar 8 '13 at 10:08

6 Answers

The argument of Kim and Roush looks as follows after translating out the semigroup theory (and is essentially using a Möbius inversion idea).

Let $T\colon \mathbb Z^S\to \mathbb Z^S$ be the group homomorphism corresponding to left multiplication by $A$. We show that in appropriate bases for the domain and codomain the matrix of $T$ is triangular with 1s on the diagonal, and hence $A$ is invertible over $\mathbb Z$. Let $e_X$ be the unit vector corresponding to a non-empty subset $X$ of $F$. Put $e_{\emptyset}=0$ for convenience. Let $b_X=e_F-e_{X^c}$ where $X^c$ is the complement of $X$. Notice that $b_F=e_F$ and $e_{X^c}=b_F-b_X$ for $X\neq F$, so every $e_Y$ lies in the $\mathbb Z$-span of the $b_X$; hence the $b_X$ form a basis for $\mathbb Z^S$.

Now one computes $$Ab_X=A(e_F-e_{X^c})=\sum_{Y\subseteq X} e_Y.$$ If we use the $b_X$ with $X\in S$ as a basis for the domain of $T$, the $e_X$ with $X\in S$ as a basis for the codomain, and totally order $S$ by a linear extension of $\subseteq$, then the matrix for $T$ with respect to these bases is triangular with 1s on the diagonal. Thus $A$ is invertible over $\mathbb Z$.
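The key identity $Ab_X=\sum_{Y\subseteq X}e_Y$ is easy to machine-check. A short Python sketch (variable names are mine, not Kim–Roush's), with $e_\emptyset=0$ as above:

```python
from itertools import combinations

n = 4
S = [frozenset(c) for k in range(1, n + 1)
     for c in combinations(range(n), k)]   # non-empty subsets of {0,...,n-1}
idx = {X: i for i, X in enumerate(S)}
F = frozenset(range(n))

def e(X):
    # unit vector e_X, with e_emptyset = 0
    v = [0] * len(S)
    if X:
        v[idx[X]] = 1
    return v

def apply_A(v):
    # (Av)_I = sum of v_J over J meeting I
    return [sum(v[idx[J]] for J in S if I & J) for I in S]

for X in S:
    b = [a - c for a, c in zip(e(F), e(F - X))]     # b_X = e_F - e_{X^c}
    expected = [1 if Y <= X else 0 for Y in S]      # sum of e_Y over non-empty Y ⊆ X
    assert apply_A(b) == expected
print("A b_X = sum of e_Y over Y ⊆ X verified for n =", n)
```

Since the coefficient of $e_X$ in $Ab_X$ is $1$ and all other nonzero coefficients sit at proper subsets of $X$, triangularity with unit diagonal follows immediately.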

The matrix for $A$ in these bases is in fact the incidence matrix of the poset $(S,\subseteq)$.
– Benjamin Steinberg, Mar 6 '13 at 4:28

From this proof it is fairly easy to compute the inverse. The inverse of the incidence matrix of the poset is the matrix of the Möbius function, which for the subset poset is very simple. The change of basis matrix for the basis change above is also very simple. Then one just has to multiply it out.
– Benjamin Steinberg, Mar 8 '13 at 0:53

Thanks - I have been too busy to digest all the answers, but with hindsight this kind of Möbius inversion comes closest to what I was hoping to hear.
– Yemon Choi, Mar 8 '13 at 2:00

The high-level description is that the transformation $A$ sends a subset to the sum of all subsets minus the sum of all subsets of the complement of that subset, and so it is essentially triangular with respect to the order. My base change removes the "essentially" part.
– Benjamin Steinberg, Mar 8 '13 at 3:20

This argument is motivated by some of the ideas in this paper of Dowling and Wilson; I think it may also be possible to extract the result directly from that paper somehow.

Let $A'$ be formed from $A$ by adding an additional row and column of $0$s to represent the empty set, and let $J$ be the $2^n \times 2^n$ matrix of all $1$s. Then $J-A'$ can be thought of as the adjacency matrix of the graph on all $2^n$ vertices where we connect two sets if they are disjoint. We have
$$\det(J-A')=\sum_{\sigma} \operatorname{sgn}(\sigma),$$
where the sum is taken over all permutations $\sigma$ such that $\sigma(I) \cap I=\emptyset$ for all subsets $I$. But the only such permutation is the one where $\sigma(I)$ is the complement of $I$ (if you assign the sets from largest cardinality to smallest, at each step there's only one choice for $\sigma(I)$), so $\det(J-A')=\pm 1$.

Since $J-A'$ has full rank and $J$ has rank $1$, $A'=J-(J-A')$ has rank at least $2^n-1$. Dropping the row and column of $0$s, we have that $A$ has full rank.
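A quick numerical check of this determinant argument (a sketch; the `det` helper is mine): for $n=4$ the disjointness matrix $J-A'$ does indeed have determinant $\pm 1$, matching the single surviving permutation above.

```python
from itertools import combinations
from fractions import Fraction

def det(M):
    # determinant by Gaussian elimination over the rationals
    M = [[Fraction(x) for x in row] for row in M]
    d, m = Fraction(1), len(M)
    for c in range(m):
        p = next((r for r in range(c, m) if M[r][c]), None)
        if p is None:
            return Fraction(0)
        if p != c:
            M[c], M[p] = M[p], M[c]
            d = -d
        d *= M[c][c]
        for r in range(c + 1, m):
            f = M[r][c] / M[c][c]
            for k in range(c, m):
                M[r][k] -= f * M[c][k]
    return d

n = 4
P = [frozenset(c) for k in range(n + 1)
     for c in combinations(range(n), k)]          # all 2^n subsets, empty set included
D = [[1 if not (I & J) else 0 for J in P] for I in P]   # J - A': entry 1 iff disjoint
d = det(D)
print(d)  # a single permutation (complementation) survives, so |det| = 1
```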

Depends on what you call enlightening. I've got a different viewpoint on this than most other mathematicians I've met.

To prove that this matrix $A$ is invertible, you should guess its inverse $M$ explicitly, and then prove that $AM=I$. This is certainly enough to prove that it's invertible! It's also potentially enlightening (or at least interesting), because now you get to try and think of an interpretation for the entries of the inverse.

Anyway, the point is that the guessing part is really, really easy in this instance, because there's an obvious structure in the inverse of the matrix. Here's the inverse for $n=3$, computed in Sage:

That is, let A(n) be the matrix for sets of size n, where the rows and columns are in lex order, and M(n) be its inverse. Then conjecturally M(n) has the following block structure:

[ 0 -v' M(n-1) ]
[ -v 0 v ]
[ M(n-1) v' -M(n-1) ]

where $v$ is the vector $[0, 0, \ldots, 0, 1]$. I'm pretty sure it'd be easy to prove this inductively, as $A$ itself has a similar block structure, though I confess I haven't done it.

EDIT: Here's the sage code that produces the matrix. Obviously it's not the smartest way to go about doing things, but it was adequate. If anyone knows a smarter but equally terse way of iterating over the power set than converting it to a list, let me know!
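The original Sage snippet did not survive here, so below is a stand-in in plain Python (my reconstruction, not the author's code): it builds $A(3)$ with rows and columns in one plausible lex order on subsets and prints the integer inverse.

```python
from itertools import combinations
from fractions import Fraction

def nonempty_subsets_lex(n):
    # non-empty subsets of {0,...,n-1}, ordered lexicographically
    # by their sorted element tuples (one possible "lex order")
    return sorted(tuple(c) for k in range(1, n + 1)
                  for c in combinations(range(n), k))

def inverse(M):
    # matrix inverse by Gauss-Jordan elimination over the rationals
    m = len(M)
    aug = [[Fraction(x) for x in row] +
           [Fraction(int(i == j)) for j in range(m)]
           for i, row in enumerate(M)]
    for c in range(m):
        p = next(r for r in range(c, m) if aug[r][c])
        aug[c], aug[p] = aug[p], aug[c]
        piv = aug[c][c]
        aug[c] = [x / piv for x in aug[c]]
        for r in range(m):
            if r != c and aug[r][c]:
                f = aug[r][c]
                aug[r] = [x - f * y for x, y in zip(aug[r], aug[c])]
    return [row[m:] for row in aug]

n = 3
S = nonempty_subsets_lex(n)
A = [[1 if set(I) & set(J) else 0 for J in S] for I in S]
M = inverse(A)
for row in M:
    print([int(x) for x in row])   # all entries are integers, since det A = ±1
```

The exact block pattern one sees depends on the subset ordering chosen, so the layout here may differ from the original Sage output.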

All informative answers are welcome. Note that I wasn't the person who used the word "enlightening" ;-)
– Yemon Choi, Mar 6 '13 at 6:10

My genuine thanks, by the way: I admit to being too technologically incompetent/lazy to stick the matrix into a CAS (more fool me), and as you observe the formula for the inverse is very suggestive. What it suggests to me is that this was indeed Möbius inversion ;-)
– Yemon Choi, Mar 8 '13 at 2:02

For a vast generalization, see Exercise 3.96(a) of Enumerative Combinatorics, vol. 1, second ed. To get the posted problem, take $L$ to be the boolean algebra of all subsets of $F$ (ordered by inclusion), and set $F(u,s)=1$ if $u\neq\emptyset$ or $s=\emptyset$, and otherwise $F(u,s)=0$. (Note that I am using $F$ in two different ways: one is Yemon's use, and the other is the use in EC1.) Then in the row of the matrix $F(s\wedge t,s)$ indexed by $\emptyset$, every entry is 0 except in the column indexed by $\emptyset$.
Hence the determinant remains the same if we remove the row and column indexed by $\emptyset$, but this gives the matrix $A$.

I am curious, since it turned out so nicely here: In the general setting, is the matrix inverse known?
– Benjamin Young, Mar 8 '13 at 4:08


The solution to Exercise 3.96(a) shows how the matrix $M=F(s\wedge t,s)$ can be triangularized, from which a recurrence for the entries of $M$ follows. I believe this is essentially what is done in Section 6 of arXiv:1110.4954, though I haven't looked at this very carefully. It would be interesting to find an explicit formula for the entries of $M^{-1}$.
– Richard Stanley, Mar 8 '13 at 20:48

For any $i,j,k$, the automorphism group of $A$ is transitive on the set of pairs $(I,J)$ such that $|I|=i, |J|=j, |I\cap J|=k$. Therefore the same is true of the inverse (if it exists). That is, the $(I,J)$-th entry of the inverse is $f(i,j,k)$ for some function $f$. I'm too lazy, but I bet that by examining Benjamin's example the function $f(i,j,k)$ can be guessed rather easily. Then we will have an explicit formula for the inverse.

Here's a WRONG guess: the $(I,J)$-th entry of the inverse is 0 unless $|I\cup J|=n$, in which case it is $(-1)^{n+k+1}$.

Here's a RIGHT guess: the $(I,J)$-th entry of the inverse is 0 if $|I\cup J| < n$ and otherwise equals $(-1)^{k+1}$. I checked this up to $n=8$.

This is easy to prove by induction using Benjamin's recursive formula for the inverse.
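Brendan's closed form is also easy to machine-check directly. A small Python sketch (my code): build $A$ and the conjectured inverse $M$ with $M_{I,J}=0$ unless $I\cup J=F$, in which case $M_{I,J}=(-1)^{|I\cap J|+1}$, and verify $AM=I$ for small $n$:

```python
from itertools import combinations

def check(n):
    # verify that the guessed M is a right inverse of A for ground set size n
    S = [frozenset(c) for k in range(1, n + 1)
         for c in combinations(range(n), k)]
    A = [[1 if I & J else 0 for J in S] for I in S]
    # conjectured inverse: 0 unless I ∪ J is everything, else (-1)^{|I∩J|+1}
    M = [[0 if len(I | J) < n else (-1) ** (len(I & J) + 1)
          for J in S] for I in S]
    m = len(S)
    for i in range(m):
        for j in range(m):
            s = sum(A[i][k] * M[k][j] for k in range(m))
            assert s == (1 if i == j else 0)
    return True

for n in range(1, 6):
    assert check(n)
print("AM = I verified for n = 1..5")
```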

After seeing the very good proofs here, I could not think of ways to prove this other than by induction.
I read O. Selim's proof, and I think it is possible to simplify the induction argument.

We can associate each subset of $S$ with a binary expansion, so that the natural numbers from $0$ to $2^n-1$ represent all subsets of $S$. The entries of the $2^n \times 2^n$ matrix $A_n$, where $|S|=n$, are then
$$
A_{ij}= \begin{cases} 1 & \text{if the binary expansions of $i$ and $j$ have a 1 in common at some digit,} \\ 0 & \text{otherwise,} \end{cases}
$$
where $i,j = 0, 1, \ldots, 2^n-1$.
So this matrix is just your original matrix with one row and one column of zeros added, which does not change the rank.

Let $E_n$ be the $2^n\times 2^n$ matrix of all $1$s.
Then we have the following block matrix forms:
$$
A_{n+1}=\begin{pmatrix} A_n & A_n \\ A_n & E_n \end{pmatrix}, \qquad E_{n+1}-A_{n+1}=\begin{pmatrix} E_n-A_n & E_n-A_n \\ E_n-A_n & 0 \end{pmatrix}.
$$
We assume our induction hypothesis
$$
\operatorname{rank} A_n=2^n-1, \qquad \operatorname{rank}(E_n-A_n)=2^n.
$$
(The base case $n=1$ is immediate: $A_1=\begin{pmatrix}0&0\\0&1\end{pmatrix}$ has rank $1$ and $E_1-A_1=\begin{pmatrix}1&1\\1&0\end{pmatrix}$ has rank $2$.)
After elementary row and column operations, we have
$$
\operatorname{rank} A_{n+1}=\operatorname{rank}\begin{pmatrix} A_n & 0 \\ 0 & E_n-A_n \end{pmatrix}, \qquad \operatorname{rank}(E_{n+1}-A_{n+1})=\operatorname{rank}\begin{pmatrix} 0 & E_n-A_n \\ E_n-A_n & 0 \end{pmatrix}.
$$
Then we have
$$
\operatorname{rank} A_{n+1}=2^{n+1}-1, \qquad \operatorname{rank}(E_{n+1}-A_{n+1})=2^{n+1}.
$$
(Added) This method might also work for finding the inverse matrix. Then we have to consider the $(2^n-1)\times(2^n-1)$ minor of $A_n$ with the row and column of all zeros deleted.
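The rank recursion above can be checked directly. A minimal Python sketch (the `rank` helper is mine) verifies $\operatorname{rank} A_n = 2^n-1$ and $\operatorname{rank}(E_n-A_n)=2^n$ for small $n$:

```python
from itertools import combinations
from fractions import Fraction

def rank(M):
    # rank via forward Gaussian elimination over the rationals
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols, r = len(M), len(M[0]), 0
    for c in range(cols):
        p = next((i for i in range(r, rows) if M[i][c]), None)
        if p is None:
            continue
        M[r], M[p] = M[p], M[r]
        piv = M[r][c]
        for i in range(r + 1, rows):
            if M[i][c]:
                f = M[i][c] / piv
                for k in range(c, cols):
                    M[i][k] -= f * M[r][k]
        r += 1
    return r

for n in range(1, 5):
    P = [frozenset(c) for k in range(n + 1)
         for c in combinations(range(n), k)]       # all 2^n subsets
    A = [[1 if I & J else 0 for J in P] for I in P]   # A_n (empty set included)
    E_minus_A = [[1 - x for x in row] for row in A]   # E_n - A_n
    assert rank(A) == 2 ** n - 1
    assert rank(E_minus_A) == 2 ** n
print("rank A_n = 2^n - 1 and rank(E_n - A_n) = 2^n for n = 1..4")
```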

I explicitly said I wanted a more "conceptual" argument. Of course induction is involved somewhere, but what I wanted was some explanation of how this example might fit into a bigger picture.
– Yemon Choi, Mar 7 '13 at 19:00


The word "conceptual" is really vague in your original posting. What I understand is that my solution involves the "concept" that elementary row and column operations don't change the rank of a matrix. Here is what I see: Kevin's answer involves "invertible iff nonzero determinant"; Benjamin Steinberg's answer involves "change of basis"; Benjamin Young's answer involves "the very definition of invertibility". So I don't see why mine is specifically what you don't want to see.
– i707107, Mar 7 '13 at 19:48

I gave this a +1. It is no less conceptual than computing the inverse by induction, and in a sense it is very much like the first answer, except without determinants. I liked all the answers.
– Benjamin Steinberg, Mar 8 '13 at 0:54

Apologies for vagueness, your comment is a fair one. I have updated the question with an attempt to clarify.
– Yemon Choi, Mar 8 '13 at 1:58


With the help of Benjamin Young's and Brendan McKay's answers, I found that Brendan's guess is true without the $n$ in the exponent. Then I tried to understand the combinatorial meaning behind it. The formula $AM=I$ is essentially $(1-1)^m=0$ for $m>0$.
– i707107, Mar 9 '13 at 1:31