For small training sets, you can quickly apply deep learning by performing transfer learning with pretrained deep networks. To speed up training on large data sets, you can use Parallel Computing Toolbox™ to distribute computations and data across multicore processors and GPUs on the desktop, and you can scale up to clusters and clouds (including Amazon EC2® P2 GPU instances) with MATLAB Distributed Computing Server™.
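As a rough illustration of the transfer-learning workflow mentioned above, the sketch below fine-tunes a pretrained network on a small labeled image set. It assumes Deep Learning Toolbox™ and the AlexNet support package are installed; the folder name `myImages`, the class count of 5, and the layer indices (which follow the standard 25-layer AlexNet architecture) are illustrative assumptions, not values from this document.

```matlab
% Load a pretrained network and inspect its layer graph
net = alexnet;              % requires the AlexNet support package
layers = net.Layers;

% Replace the final learnable layers so the network predicts
% 5 new categories instead of the original 1000 ImageNet classes
% (indices 23 and 25 assume the stock AlexNet layer ordering)
layers(23) = fullyConnectedLayer(5);
layers(25) = classificationLayer;

% 'myImages' is a hypothetical folder with one subfolder per class
imds = imageDatastore('myImages', ...
    'IncludeSubfolders', true, 'LabelSource', 'foldernames');

% Fine-tune with a low learning rate to preserve pretrained features;
% 'ExecutionEnvironment','auto' uses a GPU when one is available
opts = trainingOptions('sgdm', ...
    'InitialLearnRate', 1e-3, ...
    'MaxEpochs', 10, ...
    'ExecutionEnvironment', 'auto');
trainedNet = trainNetwork(imds, layers, opts);
```

For multi-GPU or cluster training with Parallel Computing Toolbox™, the same script applies with `'ExecutionEnvironment'` set to `'multi-gpu'` or `'parallel'`.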