Markovian models can generate very large sparse matrices, which are difficult to store and solve. A useful method for computing transient probabilities in Markovian models is uniformization. The aim of this paper is to show that the performance of uniformization can be improved using a multi-GPU architecture. We propose a partitioning scheme for the HYB sparse matrix storage format, together with optimization techniques designed to minimize communication between GPUs during iterative sparse matrix-vector multiplication, which is the most time-consuming step. Experimental results show that on multiple GPUs we can solve larger matrices than on a single device and accelerate computations in comparison to a multithreaded CPU. Computational tests have been carried out in double precision for wireless network models. Using multiple GPUs we were able to solve a model described by a matrix of size 3.6×10^7.
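To make the role of repeated sparse matrix-vector multiplication concrete, the following is a minimal single-threaded sketch of the uniformization method (not the paper's multi-GPU implementation): given a CTMC generator Q, one forms the uniformized matrix P = I + Q/Λ with Λ ≥ max|q_ii| and accumulates Poisson-weighted powers π(0)·P^k, each step being one SpMV. The function name, tolerance, and the two-state example chain are illustrative assumptions.

```python
import numpy as np
import scipy.sparse as sp

def uniformization(Q, pi0, t, tol=1e-10):
    """Transient distribution pi(t) of a CTMC with generator Q (sparse),
    via uniformization: pi(t) = sum_k w_k * pi0 @ P^k, where
    P = I + Q/lam and w_k = exp(-lam*t) * (lam*t)**k / k! (Poisson weights).
    """
    lam = -Q.diagonal().min()                       # uniformization rate >= max |q_ii|
    P = sp.identity(Q.shape[0], format="csr") + Q.tocsr() / lam
    w = np.exp(-lam * t)                            # Poisson weight w_0
    v = pi0.copy()                                  # current pi0 @ P^k
    acc = w * v                                     # weighted accumulator
    total, k = w, 0
    while 1.0 - total > tol:                        # stop once Poisson mass ~ 1
        k += 1
        v = P.T.dot(v)                              # SpMV: v <- v @ P (the dominant cost)
        w *= lam * t / k                            # Poisson recurrence
        acc += w * v
        total += w
    return acc

# Illustrative two-state chain: rate 1 from state 0 to 1, rate 2 back.
Q = sp.csr_matrix(np.array([[-1.0, 1.0],
                            [ 2.0, -2.0]]))
pi0 = np.array([1.0, 0.0])
pi_t = uniformization(Q, pi0, t=5.0)   # close to the stationary vector [2/3, 1/3]
```

In the multi-GPU setting the paper targets, it is exactly the `v = P.T.dot(v)` step that is partitioned across devices, which is why the partitioning of the HYB format and the inter-GPU communication pattern dominate performance.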