For small training sets, you can perform transfer learning with pretrained deep network models (including SqueezeNet, Inception-v3, ResNet-101, GoogLeNet, and VGG-19) and models imported from TensorFlow™-Keras and Caffe.

To speed up training on large datasets, you can distribute computations and data across multicore processors and GPUs on the desktop (with Parallel Computing Toolbox™), or scale up to clusters and clouds, including Amazon EC2® P2, P3, and G3 GPU instances (with MATLAB Distributed Computing Server™).

Network Architectures

Use various network structures such as series, directed acyclic graph (DAG), and recurrent architectures to build your deep learning network. DAG architectures support a wider range of network topologies, including those with skip connections or layers connected in parallel.
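As a minimal sketch of a DAG architecture, the layerGraph and connectLayers functions can build a small network with a skip connection feeding an addition layer (layer names here are illustrative):

```matlab
% Minimal DAG sketch: a small network with a skip connection,
% where two paths are merged in parallel by an addition layer.
layers = [
    imageInputLayer([28 28 1], 'Name', 'input')
    convolution2dLayer(3, 16, 'Padding', 'same', 'Name', 'conv1')
    reluLayer('Name', 'relu1')
    convolution2dLayer(3, 16, 'Padding', 'same', 'Name', 'conv2')
    additionLayer(2, 'Name', 'add')       % merges main path and skip path
    fullyConnectedLayer(10, 'Name', 'fc')
    softmaxLayer('Name', 'softmax')
    classificationLayer('Name', 'output')];

lgraph = layerGraph(layers);              % sequential connections made automatically
% Skip connection: route the relu1 output directly to the addition layer
lgraph = connectLayers(lgraph, 'relu1', 'add/in2');
```

The resulting layer graph can be inspected with plot(lgraph) and trained with trainNetwork like any series network.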

Transfer Learning and Pretrained Models

Import pretrained models into MATLAB for inference.

Transfer Learning

Transfer learning is commonly used in deep learning applications. You can take a pretrained network and use it as a starting point for a new task, quickly transferring its learned features with a smaller number of training images.
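A typical transfer learning sketch replaces the final layers of a pretrained network and retrains on new data; the dataset folder and class count below are hypothetical:

```matlab
% Transfer learning sketch: adapt pretrained GoogLeNet to a new task.
net = googlenet;                          % load the pretrained network
lgraph = layerGraph(net);

numClasses = 5;                           % hypothetical number of new classes
newFC = fullyConnectedLayer(numClasses, 'Name', 'new_fc');
newOut = classificationLayer('Name', 'new_output');

% Replace GoogLeNet's final learnable layer and output layer
lgraph = replaceLayer(lgraph, 'loss3-classifier', newFC);
lgraph = replaceLayer(lgraph, 'output', newOut);

% Hypothetical image folder with one subfolder per class
imds = imageDatastore('flowers', 'IncludeSubfolders', true, ...
    'LabelSource', 'foldernames');
opts = trainingOptions('sgdm', 'InitialLearnRate', 1e-4, 'MaxEpochs', 6);
net = trainNetwork(imds, lgraph, opts);
```

A low initial learning rate keeps the pretrained weights mostly intact while the new layers learn the new classes.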

Pretrained Models

Access the latest models from research with a single line of code. Import pretrained models including AlexNet, GoogLeNet, VGG-16, VGG-19, ResNet-101, Inception-v3, and SqueezeNet. See pretrained models for a complete list of models.
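Loading a pretrained model is a single function call (on first use, MATLAB prompts you to install the corresponding support package):

```matlab
% Load pretrained networks with one line of code each
net  = googlenet;        % GoogLeNet
net2 = inceptionv3;      % Inception-v3
net3 = squeezenet;       % SqueezeNet

analyzeNetwork(net)      % inspect the layers and connections interactively
```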

Network Activations

Extract activations corresponding to a layer, visualize the learned features, and train a machine learning classifier using the activations. Use the deepDreamImage function to understand and diagnose network behavior by synthesizing images that strongly activate network layers, highlighting the learned features.
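A sketch of both workflows, assuming a hypothetical labeled image folder (fitcecoc additionally requires Statistics and Machine Learning Toolbox):

```matlab
% Feature extraction sketch: use deep activations as input to a
% classical classifier, then visualize learned features.
net = alexnet;
imds = imageDatastore('myImages', 'IncludeSubfolders', true, ...
    'LabelSource', 'foldernames');        % hypothetical labeled dataset

% Extract activations from the fc7 layer, one feature row per image
feats = activations(net, imds, 'fc7', 'OutputAs', 'rows');
svm = fitcecoc(feats, imds.Labels);       % multiclass SVM on deep features

% Synthesize images that strongly activate channels of a conv layer
I = deepDreamImage(net, 'conv5', 1:4);    % first four channels of conv5
montage(I)
```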

Training Acceleration

GPU Acceleration

Speed up deep learning training and inference with high-performance NVIDIA® GPUs. You can perform training on a single workstation GPU or scale to multiple GPUs with DGX systems in data centers or on the cloud. You can use MATLAB with Parallel Computing Toolbox and most CUDA®-enabled NVIDIA GPUs that have compute capability 3.0 or higher.
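Selecting the hardware is a training option rather than a code change; a sketch, assuming Parallel Computing Toolbox and a supported GPU:

```matlab
% Sketch: choose the execution environment in trainingOptions.
opts = trainingOptions('sgdm', ...
    'ExecutionEnvironment', 'multi-gpu', ...  % or 'gpu', 'parallel', 'auto'
    'MiniBatchSize', 256, ...                 % larger batches keep GPUs busy
    'MaxEpochs', 10);
```

The same options structure is then passed to trainNetwork, so moving from CPU to single-GPU to multi-GPU training requires changing only the 'ExecutionEnvironment' value.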

MATLAB Compiler Support

Use MATLAB Compiler™ and MATLAB Compiler SDK™ to deploy trained networks as C/C++ shared libraries, Microsoft® .NET assemblies, Java® classes, and Python® packages from MATLAB programs. You can also train a shallow network model in the deployed application or component.
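One deployment sketch: save the trained network to a MAT-file, wrap prediction in a script, and compile it with the mcc command (the script and file names here are hypothetical):

```matlab
% Contents of a hypothetical predictDigits.m:
%   load('trainedNet.mat', 'net');        % trained network saved earlier
%   label = classify(net, imread(imageFile));
%
% From the MATLAB command line, compile it into a standalone application:
%   mcc -m predictDigits.m
```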

Sharing standalone MATLAB programs with MATLAB Compiler.

Shallow Neural Networks

Build models with a variety of supervised and unsupervised shallow neural network architectures.

Stacked Autoencoders

Perform unsupervised feature transformation by extracting low-dimensional features from your data set using autoencoders. You can also use stacked autoencoders for supervised learning by training multiple autoencoders and stacking them with a final classification layer.
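A sketch of the trainAutoencoder / stack workflow on a built-in example dataset; the hidden sizes and epoch counts are illustrative:

```matlab
% Stacked autoencoder sketch: unsupervised feature learning followed by
% a supervised softmax layer and end-to-end fine-tuning.
[X, T] = wine_dataset;                    % built-in example dataset

autoenc1 = trainAutoencoder(X, 10, 'MaxEpochs', 100);   % first encoder
feat1 = encode(autoenc1, X);              % low-dimensional features

autoenc2 = trainAutoencoder(feat1, 5, 'MaxEpochs', 100);
feat2 = encode(autoenc2, feat1);

softnet = trainSoftmaxLayer(feat2, T);    % supervised output layer
deepnet = stack(autoenc1, autoenc2, softnet);  % stack into one network
deepnet = train(deepnet, X, T);           % fine-tune end to end
```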