I am doing work that requires matrix multiplication that is as fast as possible, and I just want to double-check with this community that the Winograd variant of Strassen's algorithm is the fastest practical choice (so no Coppersmith–Winograd, whose constants make it impractical at any realistic size).

I am doing a lot of processing of the data before, between, and after each multiplication, to the point where using Mathematica or MATLAB would be a hindrance.

Also, I am curious whether anyone has a good handle on the error of regular Strassen vs. the Winograd variant. In "Exploiting Parallelism in Matrix Computation Kernels for SMP Systems," D'Alberto et al. briefly mention Strassen as being more accurate, but this seems counter-intuitive since the Winograd variant uses fewer operations overall.

Edit: We are using matrices up to size 2^16 x 2^16 ~ 4 billion doubles, so a sub-cubic algorithm is definitely faster than the naive one.

Edit 2: On the accuracy of Strassen vs. Winograd, if anyone is interested: in "Accuracy and Stability of Numerical Algorithms," Higham gives an in-depth analysis of the error of the two algorithms and shows that Strassen has slower error growth with respect to matrix size. Also of note, Strassen is more error-prone (relative to its own behavior on general matrices) for matrices with all positive entries.
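For concreteness, here is a minimal sketch of the Winograd variant I mean: 7 recursive multiplications plus 15 additions per level (classic Strassen uses 7 multiplications and 18 additions). This is an illustrative recursion, not tuned code; the power-of-two size assumption and the `cutoff` value of 64 are assumptions for the sketch (in practice you would pad/peel odd sizes and tune the cutoff per machine).

```python
import numpy as np

def winograd_strassen(A, B, cutoff=64):
    """Winograd's variant of Strassen's algorithm (sketch).

    Assumes square matrices whose dimension is a power of two.
    Below `cutoff` (an assumed, machine-dependent threshold) we fall
    back to the BLAS-backed classical product, which wins for small n.
    """
    n = A.shape[0]
    if n <= cutoff:
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]

    # 8 pre-additions
    S1 = A21 + A22
    S2 = S1 - A11
    S3 = A11 - A21
    S4 = A12 - S2
    S5 = B12 - B11
    S6 = B22 - S5
    S7 = B22 - B12
    S8 = S6 - B21

    # 7 recursive products
    M1 = winograd_strassen(S2, S6, cutoff)
    M2 = winograd_strassen(A11, B11, cutoff)
    M3 = winograd_strassen(A12, B21, cutoff)
    M4 = winograd_strassen(S3, S7, cutoff)
    M5 = winograd_strassen(S1, S5, cutoff)
    M6 = winograd_strassen(S4, B22, cutoff)
    M7 = winograd_strassen(A22, S8, cutoff)

    # 7 post-additions assemble the result blocks
    T1 = M1 + M2
    T2 = T1 + M4
    C = np.empty_like(A)
    C[:h, :h] = M2 + M3
    C[:h, h:] = T1 + M5 + M6
    C[h:, :h] = T2 - M7
    C[h:, h:] = T2 + M5
    return C
```

The pre-/post-addition dependency chains (S2 feeding S4, T1 feeding T2) are exactly what Higham's analysis tracks: the saved additions come at the cost of longer chains of intermediate sums, which is one intuition for why fewer operations need not mean smaller error.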

How large are your matrices? Strassen is asymptotically more efficient than the "naive" approach to matrix-matrix multiplication, but the matrix must be quite large for the Strassen algorithm to be faster.
– Wolfgang Bangerth, Feb 8 '18 at 20:35


Thanks for the response. We built a computer specifically for this, so we can go as big as 2^16 x 2^16 ~ 4 billion doubles, which takes up most of our 128 GB of RAM. It gets better than naive at about 2^10 x 2^10.
– kreitz, Feb 8 '18 at 21:33

I guess these are dense matrices containing real-world data? If they are mathematical/numerical entities with some structure, there may be much faster ways of obtaining (approximate) products (e.g., a hierarchical representation).
– David Ketcheson, Feb 11 '18 at 19:09