All, I just downloaded MAGMA 1.2.1. I always had the notion that MAGMA is about accelerating LAPACK/BLAS routines on all CPU and GPU cores available in the system. However, the 1.2.1 release notes begin with:

"MAGMA Release Notes

-----------------------------------------------------

MAGMA is intended for a single CUDA enabled NVIDIA GPU. It supports Tesla and Fermi GPUs. For more details see the MAGMA 1.0 presentation. ..."

So does MAGMA not support multiple GPUs? In particular, I am looking at dense LU factorization, matrix inversion, and SVD.

Thanks for any help,
Best Regards,
Kuruvinandan

Yes, MAGMA supports multiple GPUs for certain routines. Currently, the release has multi-GPU support for LU, Cholesky, QR, as noted later in the release notes. Multi-GPU eigenvalue routines are also under development. I will update that first sentence of the release notes, though. Thanks.

-mark

Hi,

I'd like to know if MAGMA has multi-GPU support for the matrix-matrix product (SGEMM). Presently I use cuBLAS, but the cuBLAS SGEMM runs on a single GPU only.

Thank you

No, a multi-GPU GEMM is not specifically provided in MAGMA. What size and distribution of the A, B, C matrices are you looking for?

Internally we have effectively done multi-GPU GEMM in several codes, based on cuBLAS GEMM. The implementation depends on how your data is distributed. For C = A*B, the easiest case is when A is duplicated on all GPUs while B and C are distributed by block columns: each GPU then computes its own block columns of C with a local GEMM, and no communication between GPUs is needed. A sketch of that layout follows.
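A minimal sketch of that distribution, assuming the cuBLAS v2 API and column-major storage; mgpu_sgemm and its argument names are my own for illustration, not a MAGMA routine:

    #include <cuda_runtime.h>
    #include <cublas_v2.h>

    /* C = A*B with A (m x k) duplicated on every GPU and
       B (k x n), C (m x n) split into contiguous block columns.
       hA, hB, hC are column-major host arrays. */
    void mgpu_sgemm( int ngpu, int m, int n, int k,
                     const float *hA, const float *hB, float *hC )
    {
        for( int d = 0; d < ngpu; ++d ) {
            cudaSetDevice( d );
            cublasHandle_t handle;
            cublasCreate( &handle );

            /* this GPU owns columns [j0, j0+nloc) of B and C */
            int nloc = n/ngpu + ( d < n % ngpu ? 1 : 0 );
            int j0   = d*(n/ngpu) + ( d < n % ngpu ? d : n % ngpu );

            float *dA, *dB, *dC;
            cudaMalloc( (void**) &dA, (size_t) m*k   *sizeof(float) );
            cudaMalloc( (void**) &dB, (size_t) k*nloc*sizeof(float) );
            cudaMalloc( (void**) &dC, (size_t) m*nloc*sizeof(float) );

            /* A is replicated; B gets only its block columns */
            cudaMemcpy( dA, hA, (size_t) m*k*sizeof(float),
                        cudaMemcpyHostToDevice );
            cudaMemcpy( dB, hB + (size_t) j0*k, (size_t) k*nloc*sizeof(float),
                        cudaMemcpyHostToDevice );

            /* local GEMM: C(:, j0:j0+nloc-1) = A * B(:, j0:j0+nloc-1),
               no inter-GPU traffic needed */
            float one = 1.f, zero = 0.f;
            cublasSgemm( handle, CUBLAS_OP_N, CUBLAS_OP_N,
                         m, nloc, k, &one, dA, m, dB, k, &zero, dC, m );

            cudaMemcpy( hC + (size_t) j0*m, dC, (size_t) m*nloc*sizeof(float),
                        cudaMemcpyDeviceToHost );

            cudaFree( dA ); cudaFree( dB ); cudaFree( dC );
            cublasDestroy( handle );
        }
    }

Note that the blocking cudaMemcpy calls serialize the GPUs here; to actually run them concurrently, you would pin the host memory and use cudaMemcpyAsync with a stream per GPU.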

They are not implemented as separate functions, but are part of the multi-GPU codes such as LU and QR, so I don't think they will be of much use to you. For instance, see zgetrf_mgpu.cpp, the loop that ends with "end of gpu updates". -mark