We present a novel implementation of a factorization algorithm for dense, square, unstructured matrices, namely the WZ factorization, using graphics processing units (GPUs) and CPUs to achieve high performance at low cost. We rewrite the factorization as operations on blocks of matrices and vectors, and we implement this block-vector algorithm on GPUs using an appropriate ready-to-use GPU-accelerated mathematical library, namely CUBLAS. We compare the performance of our algorithm with that of CPU implementations; in particular, our implementation on an NVIDIA Tesla C2050 GPU outperforms a CPU-based implementation. Our results show that the algorithm scales well with matrix size: the larger the matrix, the better the performance. We also discuss how the matrix size and the use of ready-to-use mathematical libraries affect numerical accuracy.
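As context for the block-vector formulation, a minimal scalar (non-blocked) CPU sketch of the classical WZ factorization might look as follows. This is an illustrative reference only, not the paper's GPU implementation: the function name, the plain-Python data layout, and the omission of pivoting are our assumptions.

```python
def wz_factor(A):
    """Reference WZ factorization of a square matrix of even order n,
    without pivoting (an illustrative assumption).  Returns (W, Z) with
    A = W @ Z, where W is unit on the diagonal and holds the multipliers,
    and Z has the characteristic butterfly (Z-shaped) zero pattern."""
    n = len(A)
    Z = [row[:] for row in A]                                  # Z starts as a copy of A
    W = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n // 2):
        k2 = n - 1 - k                                         # mirror pivot row/column
        det = Z[k][k] * Z[k2][k2] - Z[k2][k] * Z[k][k2]        # 2x2 pivot determinant
        for i in range(k + 1, k2):
            # Solve the 2x2 system for the two multipliers of row i
            wi1 = (Z[i][k] * Z[k2][k2] - Z[k2][k] * Z[i][k2]) / det
            wi2 = (Z[k][k] * Z[i][k2] - Z[i][k] * Z[k][k2]) / det
            W[i][k], W[i][k2] = wi1, wi2
            # Eliminate entries (i,k) and (i,k2); update the interior of row i
            for j in range(k, k2 + 1):
                Z[i][j] -= wi1 * Z[k][j] + wi2 * Z[k2][j]
    return W, Z
```

In the block-vector version described in the abstract, the inner row updates are grouped so that they become matrix-matrix and matrix-vector operations, which can then be delegated to CUBLAS routines on the GPU.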