Cupboard of matrices looking a little... sparse? Have this delicious Taco

Compsci boffins' open-source tool could massively speed up AI development

A bunch of clever folks have created an open-source compiler, dubbed Taco, that generates code optimized for performing calculations on sparse matrices – a useful but tricky concept in computer science.

From studying interactions in particle physics and ranking search engine results to training artificial intelligence systems, you'll encounter matrices at some point. In non-trivial projects and applications, these data structures will be filled with lots of blank spaces, usually zeroes, so they are typically stored as sparse matrices to save memory and speed up the calculations performed on them. After all, if the array is mostly zeroes, you can skip most elements.
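To see the saving concretely, here's a minimal sketch using SciPy's sparse-matrix support, not Taco itself: a matrix that is almost entirely zeroes collapses to just its handful of non-zero entries.

```python
import numpy as np
from scipy.sparse import csr_matrix

# A 1,000 x 1,000 matrix with only two non-zero entries
dense = np.zeros((1000, 1000))
dense[0, 0] = 3.0
dense[499, 2] = 7.0

# Compressed sparse row (CSR) form stores only the non-zeroes
# plus their coordinates, rather than a million floats
sparse = csr_matrix(dense)

print(sparse.nnz)  # 2 stored values
# Operating on the sparse form gives the same answer as the dense one
print(np.allclose(sparse @ np.ones(1000), dense @ np.ones(1000)))  # True
```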

Running operations efficiently on these compressed matrices is a little more involved than you might imagine, though.

There are various solutions out there. Now, though, a team of computer scientists from MIT, Adobe Research, and the French Alternative Energies and Atomic Energy Commission (CEA) has come up with a way to automate the process of efficiently crunching numbers in sparse matrices. This approach results in a potential 100-times speedup over today's software, allegedly.

A paper presented at the SPLASH (Systems, Programming, Languages and Applications: Software for Humanity) conference last week described the aforementioned Taco, a tensor algebra compiler.

Tensors are multidimensional arrays of numbers, and vectors and matrices fall under the tensor category: vectors are flat, one-dimensional arrays, whereas matrices are two-dimensional.
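The distinction is easy to see in code. A quick illustration with NumPy arrays (again, nothing Taco-specific):

```python
import numpy as np

vector = np.array([1.0, 2.0, 3.0])            # one dimension
matrix = np.array([[1.0, 0.0], [0.0, 2.0]])   # two dimensions
tensor = np.zeros((2, 3, 4))                  # a rank-3 tensor

print(vector.ndim, matrix.ndim, tensor.ndim)  # 1 2 3
```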

Saman Amarasinghe, senior author of the paper and a computer science professor at MIT, said: “Sparse representations have been there for more than 60 years. But nobody knew how to generate code for them automatically. People figured out a few very specific operations — sparse matrix-vector multiply, sparse matrix-vector multiply plus a vector, sparse matrix-matrix multiply, sparse matrix-matrix-matrix multiply. The biggest contribution we make is the ability to generate code for any tensor-algebra expression when the matrices are sparse.”
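For a sense of what one of those hand-built kernels looks like, here's a plain-Python sketch of sparse matrix-vector multiply over the common compressed sparse row (CSR) layout. It isn't code from the paper, but it follows the same skip-the-zeroes pattern that Taco's generated C kernels do.

```python
def csr_spmv(values, col_indices, row_ptr, x):
    """Compute y = A @ x where A is stored in CSR form as
    (values, col_indices, row_ptr)."""
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        # Iterate only over the non-zero entries of row i
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_indices[k]]
    return y

# A = [[1, 0, 2],
#      [0, 0, 3]]
values, col_indices, row_ptr = [1.0, 2.0, 3.0], [0, 2, 2], [0, 2, 3]
print(csr_spmv(values, col_indices, row_ptr, [1.0, 1.0, 1.0]))  # [3.0, 3.0]
```

Hand-writing a routine like this for every combination of operation and sparse format is exactly the chore Taco is meant to eliminate.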


The functions for the different tensor operations are known as “kernels”. Training an AI system on a large dataset with millions of examples, for instance, can involve many different sequences of tensor operations, each of which requires its own kernel.

Taco comes up with the kernel code automatically. “The programmer just has to give Taco the size of the tensor they’re interested in and say whether it's sparse or dense, and Taco will write you the kernel in C,” Fredrik Kjolstad, coauthor of the paper and a PhD student at MIT, explained to The Register this week. Here's an online demo of Taco to see what Kjolstad means.

Taco works out which calculations between tensor entries would involve a zero and skips them. A tensor mapping a dataset of Amazon customer ID numbers to the items purchased and the words used in the item reviews would weigh in at a whopping 107 exabytes (an exabyte being 10¹⁸ bytes) – about ten times as much as the estimated storage capacity of all of Google’s servers. Taco would, we're told, reduce this all down to just 13 gigabytes.

The researchers have also tested Taco against some well-known handcoded kernels and found that it performs just as well or even better, Kjolstad said.

So far the team have only experimented with running the optimized code on CPU cores. But where Taco could potentially make the biggest impact in AI is in handling the numerous matrix computations that neural networks execute on GPUs.

It’s a future area of study, and the researchers are talking to Nvidia about working together on that effort.

Taco could make artificially intelligent software work faster, and be more efficient and cheaper to train. It will also be interesting to see if systems will be easier to develop since some parts of the code will be automated. ®

You can play around with the MIT-licensed code here on GitHub, or here in your browser.