Abstract: ℋ-matrices, as introduced in previous papers, allow the use of
the common matrix arithmetic in an efficient, almost optimal way. This
article is concerned with the parallelisation of this arithmetic, in
particular matrix construction, matrix-vector multiplication, matrix
multiplication and matrix inversion.

Of special interest is the design of algorithms that reuse as much as
possible of the corresponding sequential methods, thereby keeping the effort
of updating an existing implementation to a minimum. This can be achieved by
exploiting the properties of shared memory systems, which are widely
available in the form of workstations and compute servers. These systems
provide a simple and commonly supported programming interface in the form of
POSIX-Threads.

The theoretical results for the parallel algorithms are tested with numerical
examples from boundary element (BEM) and finite element (FEM) applications.