You can exchange models with TensorFlow™ and PyTorch through the ONNX format and import models from TensorFlow-Keras and Caffe. The toolbox supports transfer learning with a library of pretrained models (including NASNet, SqueezeNet, Inception-v3, and ResNet-101).
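A typical transfer-learning workflow swaps out the final layers of a pretrained network and retrains on new data. The sketch below assumes SqueezeNet (whose final layers are named `conv10` and `ClassificationLayer_predictions` in the MATLAB model) and a labeled image datastore `imdsTrain` that you supply:

```matlab
% Hypothetical sketch: transfer learning with pretrained SqueezeNet.
net = squeezenet;                      % load pretrained network
lgraph = layerGraph(net);              % convert to an editable layer graph

% Replace the final 1-by-1 convolution and output layers for a
% new 5-class problem (class count is illustrative).
newConv = convolution2dLayer(1, 5, 'Name', 'new_conv', ...
    'WeightLearnRateFactor', 10, 'BiasLearnRateFactor', 10);
lgraph = replaceLayer(lgraph, 'conv10', newConv);
lgraph = replaceLayer(lgraph, 'ClassificationLayer_predictions', ...
    classificationLayer('Name', 'new_output'));

% Retrain on your own data (imdsTrain is an assumed image datastore).
opts = trainingOptions('sgdm', 'InitialLearnRate', 1e-4, 'MaxEpochs', 5);
net = trainNetwork(imdsTrain, lgraph, opts);
```

Raising the learn-rate factors on the new layers lets them adapt quickly while the pretrained weights change slowly.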

You can speed up training on a single- or multiple-GPU workstation (with Parallel Computing Toolbox™), or scale up to clusters and clouds, including NVIDIA® GPU Cloud and Amazon EC2® GPU instances (with MATLAB Parallel Server™).


Network Activations

Extract activations corresponding to a layer, visualize the learned features, and train a machine learning classifier using the activations. Use the Grad-CAM approach to understand why a deep learning network makes its classification decisions.
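Both workflows can be sketched in a few lines. The example below assumes a pretrained SqueezeNet (where `pool10` is the global pooling layer) and an image datastore `imds` of your own labeled images; `fitcecoc` additionally requires Statistics and Machine Learning Toolbox:

```matlab
% Sketch: activations as features, plus a Grad-CAM visualization.
net = squeezenet;
inputSize = net.Layers(1).InputSize;

% Extract activations from an intermediate layer and train a
% classical machine learning classifier on them.
imdsResized = augmentedImageDatastore(inputSize(1:2), imds);
features = activations(net, imdsResized, 'pool10', 'OutputAs', 'rows');
classifier = fitcecoc(features, imds.Labels);   % SVM-based multiclass model

% Visualize which image regions drove a classification decision.
img = imresize(readimage(imds, 1), inputSize(1:2));
label = classify(net, img);
map = gradCAM(net, img, label);
imshow(img); hold on
imagesc(map, 'AlphaData', 0.5); colormap jet    % overlay the heat map
```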

Framework Interoperability

Interoperate with deep learning frameworks from MATLAB.

ONNX Converter

Import and export ONNX models within MATLAB® for interoperability with other deep learning frameworks. ONNX enables models to be trained in one framework and transferred to another for inference. Use GPU Coder™ to generate optimized CUDA code, and use MATLAB Coder™ to generate C/C++ code, for the imported model.
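A minimal round-trip sketch, assuming the Deep Learning Toolbox Converter for ONNX Model Format support package is installed (file names are placeholders):

```matlab
% Import a model trained elsewhere (e.g., in PyTorch) for inference in MATLAB.
net = importONNXNetwork('model.onnx', 'OutputLayerType', 'classification');

% Export a MATLAB network so another framework can run it for inference.
trainedNet = squeezenet;
exportONNXNetwork(trainedNet, 'squeezenet.onnx');
```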

TensorFlow-Keras Importer

Import models from TensorFlow-Keras into MATLAB for inference and transfer learning. Use GPU Coder to generate optimized CUDA code, and use MATLAB Coder to generate C and C++ code, for the imported model.
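A minimal import sketch, assuming the TensorFlow-Keras importer support package is installed (the `.h5` file name and the test image are placeholders; preprocessing must match the model's original training pipeline):

```matlab
% Import a Keras model saved in HDF5 format (architecture + weights).
net = importKerasNetwork('kerasModel.h5');

% Run inference on a single image resized to the network's input size.
img = imresize(imread('peppers.png'), net.Layers(1).InputSize(1:2));
label = classify(net, img);
```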

Training Acceleration

GPU Acceleration

Speed up deep learning training and inference with high-performance NVIDIA GPUs. Perform training on a single workstation GPU, or scale to multiple GPUs with DGX systems in data centers or in the cloud. You can use MATLAB with Parallel Computing Toolbox and most CUDA®-enabled NVIDIA GPUs with compute capability 3.0 or higher.
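Switching from single-GPU to multi-GPU training is a one-option change in `trainingOptions`; the multi-GPU mode requires Parallel Computing Toolbox. A sketch (the training data `imdsTrain` and `layers` are assumed):

```matlab
% Train on a single local GPU (the default 'auto' also uses a GPU if present).
optsSingle = trainingOptions('sgdm', 'ExecutionEnvironment', 'gpu');

% Train across all supported GPUs on the local machine.
optsMulti = trainingOptions('sgdm', 'ExecutionEnvironment', 'multi-gpu');

net = trainNetwork(imdsTrain, layers, optsMulti);
```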

Unsupervised Networks

Find relationships within data and automatically define classification schemes by letting the shallow network continually adjust itself to new inputs. Use self-organizing, unsupervised networks as well as competitive layers and self-organizing maps.
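For example, a self-organizing map can cluster unlabeled data with a few lines (the random data here stands in for your own samples):

```matlab
% Sketch: cluster unlabeled data with an 8-by-8 self-organizing map.
X = rand(3, 1000);              % 1000 samples of 3-dimensional data
net = selforgmap([8 8]);        % 64-neuron SOM grid
net = train(net, X);            % unsupervised training
y = net(X);                     % one-hot cluster assignment per sample
classes = vec2ind(y);           % cluster indices 1..64
```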

Stacked Autoencoders

Perform unsupervised feature transformation by extracting low-dimensional features from your data set using autoencoders. You can also use stacked autoencoders for supervised learning by training and stacking multiple encoders.
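The train-encode-stack workflow can be sketched as follows, assuming a feature matrix `X` (features-by-samples) and one-hot target matrix `T` that you supply:

```matlab
% Sketch: stack two autoencoders and a softmax layer for classification.
autoenc1 = trainAutoencoder(X, 100, 'MaxEpochs', 100);   % first encoder
feat1 = encode(autoenc1, X);                             % 100-dim features

autoenc2 = trainAutoencoder(feat1, 50, 'MaxEpochs', 100);
feat2 = encode(autoenc2, feat1);                         % 50-dim features

softnet = trainSoftmaxLayer(feat2, T);                   % supervised head
deepnet = stack(autoenc1, autoenc2, softnet);            % full deep network
deepnet = train(deepnet, X, T);                          % fine-tune end to end
```

Each autoencoder is trained unsupervised on the previous layer's features; only the final fine-tuning step uses the labels.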
