Where the adjacency matrix performs poorly is with the out_edges(i) and in_edges(i) operations. To implement these, we must scan all n entries in the corresponding row or column of the matrix a and gather up all the indices j where a[i][j], respectively a[j][i], is true. These operations therefore take O(n) time each.
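As a sketch of what this scan looks like (not the book's implementation; the matrix is assumed here to be a list of lists of booleans):

```python
def out_edges(a, i):
    # Scan row i: collect every j with a[i][j] true -- O(n) time.
    return [j for j in range(len(a)) if a[i][j]]

def in_edges(a, i):
    # Scan column i: collect every j with a[j][i] true -- O(n) time.
    return [j for j in range(len(a)) if a[j][i]]

# A small 3-vertex example with edges 0->1, 1->2, 2->0, 2->1.
a = [
    [False, True,  False],
    [False, False, True],
    [True,  True,  False],
]
print(out_edges(a, 2))  # [0, 1]
print(in_edges(a, 1))   # [0, 2]
```

Either way, every entry of the row or column must be examined, even when the vertex has few incident edges.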

Another drawback of the adjacency matrix representation is that it is big. It stores an n×n boolean matrix, so it requires at least n² bits of memory. An implementation that stores each entry as a separate boolean value actually uses on the order of n² bytes of memory. A more careful implementation, which packs w boolean values into each word of memory, could reduce this space usage to O(n²/w) words of memory.
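One way to sketch this packing idea (the class and method names here are illustrative, not from the text) is to store each row as a single integer whose bits are the row's entries, so that w boolean values share one machine word:

```python
class PackedMatrix:
    """Illustrative bit-packed boolean matrix: bit j of rows[i] is entry (i, j)."""

    def __init__(self, n):
        self.n = n
        self.rows = [0] * n  # one integer per row

    def set(self, i, j, value):
        if value:
            self.rows[i] |= 1 << j      # turn bit j of row i on
        else:
            self.rows[i] &= ~(1 << j)   # turn bit j of row i off

    def get(self, i, j):
        return (self.rows[i] >> j) & 1 == 1

m = PackedMatrix(4)
m.set(1, 3, True)
print(m.get(1, 3))  # True
print(m.get(3, 1))  # False
```

Python integers grow as needed, so a row of any width fits in one object; in a language with fixed-width words, each row would instead be an array of n/w words.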

Despite the high memory usage and poor performance of the in_edges(i) and out_edges(i) operations, an AdjacencyMatrix can still be useful for some applications. In particular, when the graph G is dense, i.e., it has close to n² edges, then a memory usage of n² bits may be acceptable.

The AdjacencyMatrix data structure is also commonly used because algebraic operations on the matrix a can be used to efficiently compute properties of the graph G. This is a topic for a course on algorithms, but we point out one such property here: if we treat the entries of a as integers (1 for true and 0 for false) and multiply a by itself using matrix multiplication, then we get the matrix a². Recall, from the definition of matrix multiplication, that

    a²[i][j] = Σ_{k=0}^{n-1} a[i][k] · a[k][j]
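A quick check of this with a straightforward (cubic-time) matrix multiplication, on a small graph of our own choosing:

```python
def mat_mult(a, b):
    # Textbook definition: c[i][j] = sum over k of a[i][k] * b[k][j].
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Edges: 0->1, 0->2, 1->3, 2->3.
a = [
    [0, 1, 1, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
]
a2 = mat_mult(a, a)
print(a2[0][3])  # 2: the two length-2 paths 0->1->3 and 0->2->3
```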
Interpreting this sum in terms of the graph G, the formula counts the number of vertices k such that G contains both of the edges (i,k) and (k,j). That is, it counts the number of paths from i to j (through an intermediate vertex k) that have length exactly 2. This observation is the foundation of an algorithm that computes the shortest paths between all pairs of vertices in G using only O(log n) matrix multiplications.
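One standard way to realize that idea, offered here as a sketch rather than the specific algorithm the text has in mind, is repeated "squaring" of the distance matrix under (min, +): after about log₂(n) such products, d[i][j] holds the length of a shortest path from i to j.

```python
INF = float('inf')

def min_plus(a, b):
    # (min, +) matrix product: c[i][j] = min over k of a[i][k] + b[k][j].
    n = len(a)
    return [[min(a[i][k] + b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def all_pairs_shortest_paths(adj):
    n = len(adj)
    # d[i][j] = 0 on the diagonal, 1 if edge (i, j) exists, infinity otherwise.
    d = [[0 if i == j else (1 if adj[i][j] else INF) for j in range(n)]
         for i in range(n)]
    power = 1
    while power < n - 1:   # O(log n) squarings suffice
        d = min_plus(d, d)  # paths of length <= power become length <= 2*power
        power *= 2
    return d

# The path graph 0->1->2->3.
adj = [
    [False, True,  False, False],
    [False, False, True,  False],
    [False, False, False, True],
    [False, False, False, False],
]
d = all_pairs_shortest_paths(adj)
print(d[0][3])  # 3: the path 0->1->2->3
```

Each (min, +) product here costs O(n³) time, so this sketch trades speed per multiplication for the small number of multiplications; faster all-pairs algorithms refine both factors.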