About:
HDDM is a Python toolbox for hierarchical Bayesian parameter
estimation of the Drift Diffusion Model (via PyMC).
Drift Diffusion Models are used widely in psychology and cognitive
neuroscience to study decision making.

Changes:

New and improved HDDM model with the following changes:

Priors: by default, the model uses informative priors
(see http://ski.clps.brown.edu/hddm_docs/methods.html#hierarchical-drift-diffusion-models-used-in-hddm).
If you want uninformative priors, set informative=False.

Sampling: This model uses slice sampling, which converges faster
even though each individual sample is slower to generate. In our
experiments, a burn-in of 20 samples is often sufficient.

Inter-trial variability parameters are only estimated at the
group level, not for individual subjects.
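As a rough illustration of why slice sampling can converge in fewer iterations, here is a minimal univariate slice sampler (stepping-out with shrinkage, after Neal 2003). This is a self-contained sketch of the algorithm only; HDDM itself delegates sampling to PyMC's step methods, and none of the names below belong to HDDM's API.

```python
import math
import random

def slice_sample(logp, x0, n, w=1.0, seed=0):
    """Univariate slice sampler (stepping-out + shrinkage).

    logp -- log of the (unnormalized) target density
    x0   -- starting point, n -- number of samples, w -- step width
    Illustrative sketch only, not HDDM/PyMC code.
    """
    rng = random.Random(seed)
    xs, x = [], x0
    for _ in range(n):
        # Draw an auxiliary "height" uniformly under the density at x.
        logy = logp(x) + math.log(rng.random())
        # Step out: widen [l, r] around x until it brackets the slice.
        l = x - w * rng.random()
        r = l + w
        while logp(l) > logy:
            l -= w
        while logp(r) > logy:
            r += w
        # Shrink: sample uniformly in [l, r], shrinking on rejection.
        while True:
            x1 = l + (r - l) * rng.random()
            if logp(x1) > logy:
                x = x1
                break
            if x1 < x:
                l = x1
            else:
                r = x1
        xs.append(x)
    return xs
```

Every accepted point is an exact draw from the target, so no tuning of a proposal scale is needed, which is the main reason burn-in can be so short.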

About:
Orange is a component-based machine learning and data mining software. It includes a friendly yet powerful and flexible graphical user interface for visual programming. For more advanced use(r)s, [...]

Changes:

The core of the system (except the GUI) no longer includes any GPL code and can be licensed under the terms of the BSD license upon request. The graphical part remains under the GPL.

Changed the BibTeX reference to the paper recently published in JMLR MLOSS.

About:
This package contains a Python and a MATLAB implementation of the most widely used algorithms for multi-armed bandit problems. The purpose of this package is to provide simple environments for the comparison and numerical evaluation of policies.
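To give a flavor of the kind of policy such a package evaluates, here is a minimal epsilon-greedy policy run on Bernoulli arms. This is an illustrative sketch only; the function and parameter names are ours, not the package's API.

```python
import random

def epsilon_greedy(arm_means, horizon=10000, eps=0.1, seed=0):
    """Run epsilon-greedy on Bernoulli arms with the given success
    probabilities; return (total reward, per-arm pull counts).
    Illustrative sketch, not the package's interface."""
    rng = random.Random(seed)
    k = len(arm_means)
    counts = [0] * k        # pulls per arm
    values = [0.0] * k      # running mean reward per arm
    total = 0.0
    for _ in range(horizon):
        if rng.random() < eps:
            a = rng.randrange(k)                        # explore
        else:
            a = max(range(k), key=values.__getitem__)   # exploit
        reward = 1.0 if rng.random() < arm_means[a] else 0.0
        counts[a] += 1
        values[a] += (reward - values[a]) / counts[a]   # incremental mean
        total += reward
    return total, counts
```

Numerical evaluation then amounts to comparing the cumulative reward (or regret) of several such policies over many simulated runs.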

About:
Nimfa is an open-source Python library that provides a unified interface to nonnegative matrix factorization algorithms. It includes implementations of state-of-the-art factorization methods, initialization approaches, and quality scoring. Both dense and sparse matrix representations are supported.
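For intuition, the classic multiplicative-update rules of Lee and Seung, which underlie the most basic NMF method, can be sketched in plain Python. This is illustrative only; Nimfa's implementations are vectorized, support sparse matrices, and expose a different interface.

```python
import random

def nmf(V, rank, iters=500, eps=1e-9, seed=0):
    """Factor a nonnegative matrix V (nested lists) into W (n x rank)
    and H (rank x m) with V ~ W H, using multiplicative updates.
    Pure-Python sketch, not Nimfa's API."""
    rng = random.Random(seed)
    n, m = len(V), len(V[0])
    W = [[rng.random() for _ in range(rank)] for _ in range(n)]
    H = [[rng.random() for _ in range(m)] for _ in range(rank)]

    def mat(rows, cols, f):
        return [[f(i, j) for j in range(cols)] for i in range(rows)]

    def mm(A, B):  # matrix product
        return mat(len(A), len(B[0]),
                   lambda i, j: sum(A[i][k] * B[k][j] for k in range(len(B))))

    def T(A):  # transpose
        return [list(col) for col in zip(*A)]

    for _ in range(iters):
        # H <- H * (W^T V) / (W^T W H): keeps H nonnegative by construction.
        WtV, WtWH = mm(T(W), V), mm(mm(T(W), W), H)
        H = mat(rank, m, lambda i, j: H[i][j] * WtV[i][j] / (WtWH[i][j] + eps))
        # W <- W * (V H^T) / (W H H^T)
        VHt, WHHt = mm(V, T(H)), mm(W, mm(H, T(H)))
        W = mat(n, rank, lambda i, j: W[i][j] * VHt[i][j] / (WHHt[i][j] + eps))
    return W, H
```

Because both update rules multiply by nonnegative ratios, W and H stay nonnegative throughout, which is the defining constraint of NMF.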

About:
Locally Weighted Projection Regression (LWPR) is a recent algorithm that achieves nonlinear function approximation in high dimensional spaces with redundant and irrelevant input dimensions. At its [...]

This release aggregates all the changes that occurred between official
releases in the 0.4 series and various snapshot releases (in the 0.5
and 0.6 series). For a better overview of the high-level changes, see
the :ref:`release notes for 0.5 <chap_release_notes_0.5>` and
:ref:`0.6 <chap_release_notes_0.6>`, as well as the summaries of the
release candidates below.

Fixes (23 BF commits)

The significance level in the right tail was fixed to include the
tested value -- otherwise it resulted in an optimistic bias (or
absurdly high significance in the improbable case of all estimates
having the same value).

Various improvements and increased flexibility of null distribution
estimation for Measures.

All attributes are now reported in sorted order when printing a dataset.

fmri_dataset now also stores the input image type.

CrossValidation can now take a custom Splitter instance. Moreover, the
default splitter of CrossValidation is more robust in terms of the number
and type of created splits for common usage patterns (i.e. together with
partitioners).

CrossValidation takes any custom Node as errorfx argument.

ConfusionMatrix can now be used as an errorfx in CrossValidation.

LOE(ACC): Linear Order Effect in ACC was added to
ConfusionMatrix to detect performance trends across
splits.

A Node's postproc is now accessible as a property.

RepeatedMeasure has a new 'concat_as' argument that allows results to be
concatenated along the feature axis. The default behavior, stacking as
multiple samples, is unchanged.

Searchlight now has the ability to mark the center/seed of an ROI
with a feature attribute in the generated datasets.

String summaries and representations (provided by __str__
and __repr__) were made more exhaustive and more coherent.
Additional properties to access initial constructor arguments
were added to a variety of classes.

Also incorporates the changes from 0.4.6 and 0.4.7 (see the
corresponding changelogs).

0.6.0~rc2 (Thu, Mar 3 2011)

Various fixes in the mvpa.atlas module.

0.6.0~rc1 (Thu, Feb 24 2011)

Many, many, many

For an overview of the most drastic changes, see the constantly
evolving :ref:`release notes for 0.6 <chap_release_notes_0.6>`.

0.5.0 (sometime in March 2010)

This is a special release, because it has never seen the general public.
A summary of fundamental changes introduced in this development version
can be seen in the :ref:`release notes <chap_release_notes_0.5>`.

Most notably, this version was the first to come with a comprehensive
two-day workshop/tutorial.

0.4.7 (Tue, Mar 07 2011) (Total: 12 commits)

A bugfix release

Fixed

Addressed an issue with input NIfTI files that have scl_ fields
set: it could result in incorrect analyses and incorrect
map2nifti-produced NIfTI files. Input files now account for
scaling/offset if the scl_ fields direct to do so. Moreover, upon
map2nifti, those fields get reset.

:file:`doc/examples/searchlight_minimal.py` -- the best error is the
minimal one

Enhancements

:class:`~mvpa.clfs.gnb.GNB` can now tolerate training datasets
with a single label

:class:`~mvpa.clfs.meta.TreeClassifier` can have trailing nodes
with no classifier assigned

0.4.6 (Tue, Feb 01 2011) (Total: 20 commits)

A bugfix release

Fixed (few BF commits):

Compatibility with numpy 1.5.1 (histogram) and scipy 0.8.0
(workaround for a regression in legendre)

About:
The Maja Machine Learning Framework (MMLF) is a general framework for problems in the domain of Reinforcement Learning (RL), written in Python. It provides a set of RL-related algorithms and a set of benchmark domains. Furthermore, it is easily extensible and allows automating the benchmarking of different agents.

About:
FLANN is a library for performing fast approximate nearest neighbor searches in high dimensional spaces. It contains a collection of algorithms we found to work best for nearest neighbor search.