About:
The Advanced Data mining And Machine learning System (ADAMS) is a flexible workflow engine aimed at quickly building and maintaining data-driven, reactive workflows, easily integrated into business processes.

Changes:

Some highlights:

Over 80 new actors, nearly 30 new conversions

Weka Investigator -- the big brother of the Weka Explorer: work more efficiently
with fewer clicks, using multiple datasets in multiple sessions
and multiple predefined outputs per evaluation run

File commander -- dual-pane file manager (inspired by Norton/Midnight Commander)
that lets you manage local and remote files (FTP, SFTP, SMB); usually faster
than native file managers (like Windows Explorer, Nautilus, Caja) when
handling tens of thousands of files in a single directory

Experimental deeplearning4j module

Module for querying/consuming web services using Groovy

Basic terminal-based GUI for remote machines (e.g. cloud)

Many interactive actors can now be used in headless environments as well

Fixed a memory leak introduced by Java's logging framework

Flow editor now has predefined rules for swapping actors, e.g. Trigger
with Tee or ConditionalTrigger, preserving as many options as possible
(including any sub-actors).

About:
DIANNE is a modular software framework for designing, training and evaluating artificial neural networks on heterogeneous, distributed infrastructure. It is built on top of OSGi and AIOLOS and can transparently deploy and redeploy (parts of) a neural network on multiple machines, as well as scale up training on a compute cluster.

About:
The GPML toolbox is a flexible and generic Octave/Matlab implementation of inference and prediction with Gaussian process models. The toolbox offers exact inference, approximate inference for non-Gaussian likelihoods (Laplace's Method, Expectation Propagation, Variational Bayes), as well as approximate inference for large datasets (FITC, VFE, KISS-GP). A wide range of covariance, likelihood, mean and hyperprior functions makes it possible to build very complex GP models.

Changes:

A major code restructuring effort took place in this release, unifying certain inference functions and allowing more flexibility in covariance function composition. We also redesigned the whole derivative computation pipeline, substantially improving overall runtime. Finally, grid-based covariance approximations are now included natively.