The library provides efficient implementations of the following algorithms:

support vector machines for classification

relevance vector machines for regression and classification

reduced set approximation of SV decision surfaces

online kernel RLS regression

online kernelized centroid estimation/one class classifier

online SVM classification

kernel k-means clustering

radial basis function networks

kernelized recursive feature ranking

Bayesian network inference using junction trees or MCMC
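To give a concrete flavour of the kernel methods in this list, here is a minimal pure-Python sketch of kernel k-means. This is illustrative only (dlib's implementation is C++ and far more efficient): each point is assigned to the cluster whose centre in feature space is nearest, computed entirely from kernel evaluations.

```python
import math

def rbf(a, b, gamma=1.0):
    # Gaussian (RBF) kernel between two points given as tuples
    return math.exp(-gamma * sum((u - v) ** 2 for u, v in zip(a, b)))

def kernel_kmeans(points, k, iters=20):
    # Lloyd-style iteration where distances are computed in feature space
    # using only entries of the kernel (Gram) matrix.
    n = len(points)
    K = [[rbf(points[i], points[j]) for j in range(n)] for i in range(n)]
    labels = [i % k for i in range(n)]   # simple deterministic initialisation
    for _ in range(iters):
        clusters = [[i for i in range(n) if labels[i] == c] for c in range(k)]
        new_labels = []
        for i in range(n):
            best_c, best_d = labels[i], float("inf")
            for c, idx in enumerate(clusters):
                if not idx:
                    continue
                # ||phi(x_i) - mu_c||^2 expanded in terms of kernel entries
                d = (K[i][i]
                     - 2.0 * sum(K[i][j] for j in idx) / len(idx)
                     + sum(K[p][q] for p in idx for q in idx) / len(idx) ** 2)
                if d < best_d:
                    best_c, best_d = c, d
            new_labels.append(best_c)
        if new_labels == labels:
            break
        labels = new_labels
    return labels
```

Because the algorithm touches the data only through kernel values, swapping the RBF kernel for any other positive definite kernel changes the cluster shapes without changing the code.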

The library also comes with extensive documentation and example programs that walk the user through the use of these machine learning techniques.

dlib also comes with a fast matrix library that lets the user use a simple MATLAB-like syntax. It is also capable of using optimized BLAS libraries such as ATLAS or the Intel MKL when available. The use of BLAS is transparent to the user: the dlib matrix object calls BLAS internally to optimize the various forms of matrix multiplication while still presenting the same simple MATLAB-like syntax.

Changes from previous versions:

This release includes a lot of bug fixes, usability enhancements, and speedups. It also includes a new global optimization algorithm as well as new examples showing how to do semantic segmentation using dlib's deep learning tooling.

This release upgrades dlib's CNN+MMOD object detector to support creating multi-class detectors. It also includes significant speed improvements, allowing the detector to run at 98 FPS when executed on an NVIDIA 1080 Ti GPU. This release also adds a new 5-point face landmarking model that is over 10x smaller than the 68-point model, runs faster, and works with both HOG and CNN generated face detections. It is now the recommended landmarking model to use for face alignment.

This release brings a lot of new features to dlib. There is a dlib to Caffe converter, a bunch of new deep learning layer types, cuDNN v6 and v7 support, and a bunch of optimizations that make things run faster in different situations, like ARM NEON support, which makes HOG based detectors run a lot faster on mobile devices. However, the coolest new feature is an upgrade to the CNN+MMOD object detector to support detecting objects with varying aspect ratios.

This release adds a number of new features, most notably new deep learning tools including a state-of-the-art face recognition example using dlib's deep learning API. See http://dlib.net/dnn_face_recognition_ex.cpp.html for an introduction.

This release adds a number of new features, most important of which is a deep convolutional neural network version of the max-margin object detection algorithm. This tool makes it very easy to create high quality object detectors. See http://dlib.net/dnn_mmod_ex.cpp.html for an introduction.

This release adds a deep learning toolkit to dlib that has a clean and fully documented C++11 API. It also includes CPU and GPU support, binds to cuDNN, can train on multiple GPUs at a time, and comes with a pretrained ImageNet model based on ResNet-34.

The release also adds a number of other improvements such as new elastic net regularized solvers and QP solvers, improved MATLAB binding tools, and other usability tweaks and optimizations.

This release has focused on build system improvements, both for the Python API and C++ builds using CMake. This includes adding a setup.py script for installing the dlib Python API as well as a make install target for installing a C++ shared library for non-Python use.

This release contains mostly minor bug fixes and usability improvements. The notable additions are new routines for extracting local-binary-pattern features from images and improved tools for learning distance metrics.

In addition to a number of usability improvements, this release adds an implementation of the recent paper "One Millisecond Face Alignment with an Ensemble of Regression Trees" by Vahid Kazemi and Josephine Sullivan. This includes tools for performing high quality face landmarking as well as tools for training new landmarking models. See the face_landmark_detection_ex.cpp and train_shape_predictor_ex.cpp example programs for an introduction.

The major new feature in this release is a Python API for training histogram-of-oriented-gradient based object detectors and examples showing how to use this type of detector to perform real-time face detection. Additionally, this release also adds simpler interfaces for learning to solve assignment and multi-target tracking problems.

This release adds a tool for training histogram-of-oriented-gradient based object detectors and examples showing how to use this type of detector to perform real-time face detection. The release also adds multi-threaded training options for the multiclass classifiers as well as numerous other usability improvements.

This release adds bound constrained non-linear optimizers using the BFGS and L-BFGS methods. It also includes a new tool for learning a max-margin Mahalanobis distance metric as well as routines for easily computing Felzenszwalb's 31 channel HOG image representation.

This release has been focused on improving the speed and usability of dlib's structural support vector machine solver. This includes two new tutorial style example programs showing how to use the solver from either C++ or Python.

This release brings a tool for solving large scale support vector regression problems to the library as well as a structural SVM tool for learning BIO or BILOU style sequence tagging models. It also adds python interfaces to a number of dlib's machine learning tools.
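For readers unfamiliar with BIO-style tagging: each token is labelled B-X (begins an entity of type X), I-X (inside one), or O (outside any entity). The small decoder below is an illustrative sketch (not part of dlib's API) that turns such a tag sequence into labelled spans.

```python
def bio_to_spans(tags):
    # Convert BIO tags (e.g. ["B-PER", "I-PER", "O", "B-LOC"]) into
    # (label, start, end) spans, with end exclusive.
    spans, start, label = [], None, None
    for i, tag in enumerate(tags):
        # A B- tag, or an I- tag whose type differs from the open span,
        # starts a new entity.
        if tag.startswith("B-") or (tag.startswith("I-") and label != tag[2:]):
            if label is not None:
                spans.append((label, start, i))
            start, label = i, tag[2:]
        elif tag == "O":
            if label is not None:
                spans.append((label, start, i))
            start, label = None, None
    if label is not None:
        spans.append((label, start, len(tags)))
    return spans
```

The BILOU scheme mentioned above refines this by marking last tokens (L-) and single-token entities (U-) explicitly, which often helps the learned tagger.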

This release includes a large number of new minor features and usability improvements. It also includes a new machine learning tool for learning to rank objects. This is the dlib::svm_rank_trainer, an implementation of the well-known SVM-Rank algorithm. Additionally, the implementation runs in O(n*log(n)) time and is therefore suitable for use with large training datasets.

This release brings a number of new features to the library. The highlights include a probabilistic CKY parser, tools for creating applications using the Bulk Synchronous Parallel computing model, and two new clustering algorithms: Chinese Whispers and Newman's modularity clustering.
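Chinese Whispers is simple enough to sketch in a few lines. The pure-Python version below is illustrative only (dlib's implementation is C++): every node starts in its own cluster and repeatedly adopts the most common label among its neighbours until the labelling stabilises.

```python
import random
from collections import Counter

def chinese_whispers(edges, num_nodes, iters=10, seed=0):
    # edges: list of undirected (i, j) pairs over nodes 0..num_nodes-1
    nbrs = [[] for _ in range(num_nodes)]
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    labels = list(range(num_nodes))       # every node starts in its own cluster
    rng = random.Random(seed)
    order = list(range(num_nodes))
    for _ in range(iters):
        rng.shuffle(order)                # visit nodes in random order
        for node in order:
            if nbrs[node]:
                counts = Counter(labels[n] for n in nbrs[node])
                # adopt the most common label among neighbours
                labels[node] = counts.most_common(1)[0][0]
    return labels
```

A nice property, and a reason it suits tasks like clustering face descriptors, is that the number of clusters is not specified in advance; it emerges from the graph's connectivity.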

This release has focused on adding a set of graph cut algorithms. In particular, tools for finding the minimum weight cut on a graph, finding the MAP assignment of a Potts style Markov random field, and structural SVM tools for learning the parameters of such Markov models have been added.

This release has focused mostly on minor usability and feature improvements. Some highlights are better support for learning to do sequence labeling from unbalanced data, new image processing routines, and new tools for performing Kalman filtering and recursive least squares filtering.
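As a reminder of what Kalman filtering does, here is a scalar sketch, not dlib's implementation (which handles full state vectors and covariance matrices): it tracks a slowly varying value through noisy measurements by blending prediction and measurement according to their variances.

```python
def kalman_1d(measurements, q=1e-4, r=0.1, x0=0.0, p0=1.0):
    # Scalar Kalman filter for a random-walk state:
    #   x_k = x_{k-1} + w,  z_k = x_k + v
    # q: process noise variance, r: measurement noise variance
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p += q                      # predict: variance grows by process noise
        k = p / (p + r)             # Kalman gain: how much to trust z
        x += k * (z - x)            # update with the measurement residual
        p *= (1 - k)                # posterior variance shrinks
        estimates.append(x)
    return estimates
```

Recursive least squares filtering follows the same predict/update pattern but estimates regression coefficients instead of a dynamic state.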

This release includes new interfaces to the quadratic program solvers as well as implementations of C-SVM, epsilon-insensitive support vector regression, and one-class SVM algorithms. Additionally, general purpose tools for creating one-vs-one and one-vs-all multiclass classifiers have been added.
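The one-vs-all construction is easy to illustrate. The sketch below wraps a plain perceptron, which stands in here for the SVM trainers the library actually uses, trains one binary classifier per class, and predicts with the most confident one.

```python
def train_perceptron(xs, ys, epochs=50):
    # ys in {-1, +1}; returns a weight vector with the bias as last component
    w = [0.0] * (len(xs[0]) + 1)
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            xb = list(x) + [1.0]
            score = sum(wi * xi for wi, xi in zip(w, xb))
            if y * score <= 0:               # mistake: nudge w toward y*x
                w = [wi + y * xi for wi, xi in zip(w, xb)]
    return w

def ova_train(xs, labels):
    # one binary classifier per class: class c vs the rest
    classes = sorted(set(labels))
    return {c: train_perceptron(xs, [1 if l == c else -1 for l in labels])
            for c in classes}

def ova_predict(models, x):
    xb = list(x) + [1.0]
    # pick the class whose binary classifier is most confident
    return max(models, key=lambda c: sum(wi * xi for wi, xi in zip(models[c], xb)))
```

One-vs-one works analogously but trains a classifier for every pair of classes and predicts by voting; it trades more classifiers for smaller, often easier, binary problems.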

The major new feature in this release is a general purpose trust region routine for performing non-linear optimization as well as a Levenberg-Marquardt implementation for solving non-linear least squares problems. This release also includes a variety of feature improvements and optimizations to the linear algebra support library.
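For a single-parameter model the Levenberg-Marquardt idea fits in a few lines. The sketch below is illustrative only (not dlib's solver): it fits y = exp(-k*t) to data by taking damped Gauss-Newton steps, shrinking the damping when a step reduces the cost and inflating it when it does not.

```python
import math

def levenberg_marquardt(ts, ys, k0=0.1, iters=50):
    # Fit y = exp(-k * t) by minimising the sum of squared residuals.
    k, lam = k0, 1e-3
    def residuals(k):
        return [y - math.exp(-k * t) for t, y in zip(ts, ys)]
    def cost(k):
        return sum(r * r for r in residuals(k))
    for _ in range(iters):
        r = residuals(k)
        # Jacobian of the residuals wrt k: d/dk (y - e^{-kt}) = t * e^{-kt}
        J = [t * math.exp(-k * t) for t in ts]
        g = sum(Ji * ri for Ji, ri in zip(J, r))      # J^T r
        h = sum(Ji * Ji for Ji in J)                  # J^T J (scalar here)
        step = g / (h + lam)                          # damped Gauss-Newton step
        if cost(k - step) < cost(k):
            k -= step         # accept: behave more like Gauss-Newton
            lam *= 0.5
        else:
            lam *= 10.0       # reject: behave more like gradient descent
    return k
```

With a vector of parameters, h becomes the matrix J^T J + lam*I and the step a linear solve, but the accept/reject logic is unchanged.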

This release adds a tool for performing kernel ridge regression on large datasets. It also implements an efficient method for computing leave-one-out cross-validation error rates. Finally, a new example program detailing the steps necessary to create custom matrix expressions is included.
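Kernel ridge regression itself is compact enough to sketch. The pure-Python version below is illustrative, not dlib's trainer: it solves (K + lambda*I) alpha = y once, then predicts with a weighted sum of kernel evaluations against the training points.

```python
import math

def rbf(a, b, gamma=1.0):
    # Gaussian (RBF) kernel on scalars
    return math.exp(-gamma * (a - b) ** 2)

def solve(A, b):
    # Gaussian elimination with partial pivoting (fine for small systems)
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def krr_fit(xs, ys, lam=1e-3):
    # Solve the regularised system (K + lam*I) alpha = y
    n = len(xs)
    K = [[rbf(xs[i], xs[j]) + (lam if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    alpha = solve(K, ys)
    return lambda x: sum(a * rbf(x, xi) for a, xi in zip(alpha, xs))
```

The efficient leave-one-out trick mentioned above exploits the fact that for ridge-style solutions the held-out residuals can be read off from the fitted residuals and the hat matrix, without refitting n times.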

This release adds a kernel cache and support for training on highly unbalanced data to the PEGASOS SVM training module. Additionally, the library now includes an implementation of the L-BFGS algorithm for unconstrained optimization.
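The core PEGASOS update is short enough to sketch. The pure-Python version below shows the basic linear, balanced variant only, without the kernel cache or the class-weighting for unbalanced data that this release adds: at each step it shrinks the weights (the regulariser's gradient) and, when the hinge loss is active, moves toward the misclassified example.

```python
import random

def pegasos_train(xs, ys, lam=0.01, steps=2000, seed=0):
    # PEGASOS: stochastic sub-gradient descent on the linear SVM objective
    #   min_w  lam/2 * ||w||^2 + (1/n) * sum_i max(0, 1 - y_i <w, x_i>)
    rng = random.Random(seed)
    w = [0.0] * len(xs[0])
    for t in range(1, steps + 1):
        i = rng.randrange(len(xs))                # pick one random example
        eta = 1.0 / (lam * t)                     # decaying step size
        margin = ys[i] * sum(wj * xj for wj, xj in zip(w, xs[i]))
        w = [(1 - eta * lam) * wj for wj in w]    # shrink (regulariser step)
        if margin < 1:                            # hinge loss active
            w = [wj + eta * ys[i] * xj for wj, xj in zip(w, xs[i])]
    return w
```

Handling unbalanced data amounts to weighting the hinge-loss step differently for the two classes; the kernelised variant replaces the dot products with kernel evaluations, which is where a kernel cache pays off.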