Computing the eigenvalues or singular values of a matrix, as a general rule,
boils down to an iterative procedure. A naive application of such an
iteration to a dense \(n \times n\) matrix would often require \(O(n^4)\)
work and be less numerically stable than first spending \(O(n^3)\) work
to condense the matrix to a form where each of roughly \(O(n)\) iterations
requires at most \(O(n^2)\) work. In the case of a full eigenvalue
decomposition of a Hermitian matrix, a similarity transformation composed of
Householder transformations is applied to condense the matrix down to
real symmetric tridiagonal form, where an iterative algorithm can be quickly
(read: in at most cubic time) applied.
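This two-stage approach can be sketched for a real symmetric matrix using SciPy's high-level wrappers (an illustrative sketch, not the blocked implementation discussed here): `scipy.linalg.hessenberg` applies the Householder similarity reduction, which for symmetric input yields a tridiagonal matrix up to roundoff, and `scipy.linalg.eigh_tridiagonal` runs an iterative eigensolver on the condensed form.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal, hessenberg

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
A = (A + A.T) / 2  # random real symmetric test matrix

# Householder similarity reduction; for symmetric input the upper
# Hessenberg form is (up to roundoff) symmetric tridiagonal.
T, Q = hessenberg(A, calc_q=True)
d = np.diag(T)      # main diagonal of the tridiagonal matrix
e = np.diag(T, -1)  # subdiagonal

# The iterative eigensolver on the condensed form agrees with a
# direct dense solve of the original matrix.
w_tri = eigh_tridiagonal(d, e, eigvals_only=True)
w_dense = np.linalg.eigvalsh(A)
assert np.allclose(np.sort(w_tri), np.sort(w_dense))
```

The reduction itself is \(O(n^3)\), after which each iteration of the tridiagonal solver touches only \(O(n)\) data.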
Similarly, it is standard practice to condense a general matrix to bidiagonal
form in order to compute its Singular Value Decomposition, and to reduce a
general square matrix to upper Hessenberg form in order to compute its Schur
decomposition. In each of these cases, it is possible to cast a significant portion
of the reduction to condensed form into high-performance
matrix-matrix multiplications [DSH1989].
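The Hessenberg case can likewise be sketched with SciPy's wrappers (again an illustration of the standard practice, not the blocked algorithms of [DSH1989]): `scipy.linalg.hessenberg` condenses a general square matrix, and the Schur form computed from the Hessenberg matrix shares the original eigenvalues.

```python
import numpy as np
from scipy.linalg import hessenberg, schur

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))

# Condense to upper Hessenberg form, A = Q H Q^T, with H zero below
# the first subdiagonal, so later QR iterations cost O(n^2) each.
H, Q = hessenberg(A, calc_q=True)
assert np.allclose(A, Q @ H @ Q.T)
assert np.allclose(np.tril(H, -2), 0.0)

# The iterative phase produces the real Schur form of H; schur()
# performs the same Hessenberg condensation internally.
T, Z = schur(H)
w_schur = np.sort_complex(np.linalg.eigvals(T))
w_direct = np.sort_complex(np.linalg.eigvals(A))
assert np.allclose(w_schur, w_direct)
```

SciPy does not expose the bidiagonal reduction at this level (it lives in the LAPACK routine `gebrd`), but the structure of the SVD computation is analogous.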