I've only ever heard it described as "huge", and a google search turned up nothing.

Also, given that the Strassen algorithm has a significantly greater constant than naive matrix multiplication, and that Coppersmith-Winograd's is greater still, are there any indications of what constant an O(n^2) matrix multiplication algorithm might have?

2 Answers

In your second question, I think you mean "naive matrix multiplication", not "Gaussian elimination".

Henry Cohn et al. have a cute paper that relates fast matrix multiplication algorithms to certain groups. It doesn't do much for answering your question (unless you want to go and prove the conjectured results =), but it's a fun read.

Also, to back up harrison, I don't think that anyone really believes that there's an $O(n^2)$ algorithm. A fair number of people believe that there is likely to be an algorithm which is $O(n^{2+\epsilon})$ for any $\epsilon > 0$. An $O(n^2 \log n)$ algorithm would fit the bill.
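To spell out why such an algorithm would fit the bill (a one-line check, my addition): for every fixed $\epsilon > 0$,

$$\frac{n^2 \log n}{n^{2+\epsilon}} = \frac{\log n}{n^{\epsilon}} \longrightarrow 0 \quad (n \to \infty),$$

so $n^2 \log n = O(n^{2+\epsilon})$ for all $\epsilon > 0$, even though it is not $O(n^2)$.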

edit: You can get a back-of-the-envelope feel for a lower bound on the constant of Coppersmith-Winograd from the fact that people don't use it, even for $n$ on the order of 10,000: naive matrix multiplication requires $2n^3 + O(n^2)$ flops, and Coppersmith-Winograd requires $Cn^{2.376} + O(n^2)$. Setting the two expressions equal and solving for $C$ shows that the algorithms would have equal performance at $n = 10{,}000$ (ignoring memory access patterns, implementation efficiency, and all sorts of other things) if the constant were about 627. In reality, it's likely much larger.
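The arithmetic above can be sketched in a few lines (my own illustration, not part of the original answer; the exponent 2.376 and the crossover point $n = 10{,}000$ are taken from the text, and lower-order $O(n^2)$ terms are ignored):

```python
# Back-of-the-envelope: find the constant C at which
# Coppersmith-Winograd (C * n**2.376 flops) would match
# naive matrix multiplication (2 * n**3 flops) at n = 10,000.

n = 10_000
omega = 2.376  # Coppersmith-Winograd exponent

# Solve 2 * n**3 == C * n**omega for C:
C = 2 * n ** (3 - omega)
print(round(C))  # about 627
```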

In answer to the second part of your question, I think the conventional wisdom is that there isn't an O(n^2) algorithm; analogously to the case of integer multiplication, you shouldn't be able to do better than about O(n^2 log n). (Raz has shown that this is a lower bound in the arithmetic-circuit model with bounded coefficients.)

What's the implied constant there? Probably just "huge." As far as I know, the reason that people believe that we can achieve close to O(n^2) is basically by analogy with integer multiplication, so if you want some grasp on the constants it might be worthwhile to look at the constants in FFT multiplication.

Incidentally, has the appropriate volume of Art of Computer Programming been released, or will it be soon? I know Knuth's a stickler for including these kinds of details, so that might be the most obvious reference apart from the original paper...