The use of GPUs for processing large parallelizable datasets has increased sharply in recent years. Because GPU computing is still relatively young, metrics other than computation time, such as energy efficiency, are often overlooked. Two parallel computing platforms, CUDA and OpenCL, provide developers with interfaces for working directly with GPUs. CUDA is designed specifically for NVIDIA GPUs, while OpenCL can be used with GPUs from any vendor, as well as with CPUs and FPGAs. In this paper, we analyze the energy efficiency of the two platforms, using large matrix multiplication applications as our basis of comparison. We found that CUDA expends less energy over a shorter time than OpenCL when given the same computational workload.