For small training sets, you can apply deep learning quickly by performing transfer learning with pretrained deep network models (such as GoogLeNet, AlexNet, VGG-16, and VGG-19) or with models imported from the Caffe Model Zoo.
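The transfer-learning workflow above can be sketched in MATLAB as follows. This is a minimal example, assuming the AlexNet support package is installed and that `imdsTrain` is an `imageDatastore` of your labeled training images; `numClasses` is set to an illustrative value.

```matlab
% Load a pretrained network (requires the AlexNet support package).
net = alexnet;

% Keep all layers except the final classification layers.
layersTransfer = net.Layers(1:end-3);

% Replace the last layers for the new task (example: 5 new categories).
numClasses = 5;
layers = [
    layersTransfer
    fullyConnectedLayer(numClasses, ...
        'WeightLearnRateFactor',20,'BiasLearnRateFactor',20)
    softmaxLayer
    classificationLayer];

% Retrain on the new (small) training set.
options = trainingOptions('sgdm', ...
    'InitialLearnRate',1e-4, ...
    'MaxEpochs',10);
netTransfer = trainNetwork(imdsTrain,layers,options);
```

Because the early layers are reused, only the new final layers need substantial training, which is why transfer learning works well with small training sets.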

To speed up training on large data sets, you can distribute computations and data across multicore processors and GPUs on the desktop (with Parallel Computing Toolbox™), or scale up to clusters and clouds, including Amazon EC2® P2 GPU instances (with MATLAB Distributed Computing Server™).
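Scaling training out is largely a matter of changing the execution environment in the training options. A hedged sketch, again assuming `imdsTrain` and `layers` are already defined as above:

```matlab
% Train across all available GPUs on the local machine
% (requires Parallel Computing Toolbox).
options = trainingOptions('sgdm', ...
    'ExecutionEnvironment','multi-gpu');

% For a cluster or cloud pool, use 'parallel' instead,
% after opening a pool on the cluster profile:
% parpool('MyClusterProfile');
% options = trainingOptions('sgdm','ExecutionEnvironment','parallel');

net = trainNetwork(imdsTrain,layers,options);
```

The same training call is used in every case; only the `ExecutionEnvironment` option (and, for clusters, the parallel pool) changes.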