While OpenCV delivers decent out-of-the-box performance for classical algorithms on desktops, it does not provide sufficient performance for modern computer vision algorithms, such as deep learning methods, and it also falls short on embedded platforms.

The library itself is written in C/C++, but Python bindings are provided when running the installer. OpenCV is hands down my favorite computer vision library, but it does have a learning curve.

Be prepared to spend a fair amount of time learning the intricacies of the library and browsing the docs (which have gotten substantially better now that NumPy support has been added).

If you are still testing the computer vision waters, you might want to check out the SimpleCV library mentioned below, which has a substantially smaller learning curve.

If NumPy’s main goal is large, efficient, multi-dimensional array representations, then, by far, the main goal of OpenCV is real-time image processing.

News about OpenCV and Intel's solution

It will demonstrate use of the OpenCL-based transparent API on a popular computer vision problem: pedestrian detection.

SimpleCV

The goal of SimpleCV is to get you involved in image processing and computer vision as soon as possible. And they do a great job at it. The learning curve is substantially
smaller than that of OpenCV, and as their tagline says, “it’s computer vision made easy”. That all said, because the learning curve is smaller, you don’t have access to as
many of the raw, powerful techniques supplied by OpenCV. If you’re just testing the waters, definitely try this library out.

mahotas

Mahotas, just like OpenCV and SimpleCV, relies on NumPy arrays. Much of the functionality implemented in Mahotas can be found in OpenCV and/or SimpleCV, but in some cases the Mahotas interface is just easier to use, especially when it comes to its features package.
scikit-learn

Scikit-learn also includes a handful of image feature extraction functions.
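One example of those helpers is `extract_patches_2d`, which samples fixed-size patches from an image; patches like these are a common input for patch-based feature learning.

```python
import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d

# Illustrative 32x32 grayscale image.
image = np.random.rand(32, 32)

# Sample 100 random 8x8 patches from the image.
patches = extract_patches_2d(image, patch_size=(8, 8), max_patches=100,
                             random_state=42)
print(patches.shape)  # (100, 8, 8)
```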

scikit-image

Scikit-image is fantastic, but you have to know what you are doing to effectively use this library — and I don’t mean this in a “there is a steep learning curve” type of way. The
learning curve is actually quite low, especially if you check out their gallery.

The algorithms included in scikit-image (I would argue) follow closer to the state-of-the-art in computer vision. New algorithms straight from academic papers can be found in scikit-image, but in order to (effectively) use these algorithms, you need to have developed some rigor and understanding of the computer vision field. If you already have some experience in computer vision and image processing, definitely check out scikit-image; otherwise, I would continue working with OpenCV and SimpleCV to start.
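To show how low that learning curve can be, here is a short sketch in the style of the gallery examples: Canny edge detection on one of the library's bundled sample images.

```python
from skimage import data, feature

# Built-in 512x512 grayscale test image bundled with scikit-image.
image = data.camera()

# Canny edge detection; sigma controls the Gaussian smoothing step.
edges = feature.canny(image, sigma=3)
print(edges.shape, edges.dtype)  # boolean edge mask, same size as input
```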

Elementary parallel programming for Python: the pprocess module provides elementary support for parallel programming in Python, using a fork-based process creation model in conjunction with a channel-based communications model implemented using socketpair (or pipes) and poll.

Extracting features from images is an inherently parallelizable task. You can reduce the amount of time it takes to extract features from an entire dataset by using a multiprocessing/multithreading library.

The h5py package is a Pythonic interface to the HDF5 binary data format. It lets you store huge amounts of numerical data, and easily manipulate that data from NumPy. For example, you can slice into multi-terabyte datasets stored on disk, as if they were real NumPy arrays.

The h5py Python library can help you:

- store large numerical datasets;

- easily manipulate that data from NumPy;

- store thousands of datasets in a single file, categorized and tagged however you want.

So, if you have a large dataset represented as a NumPy array that won't fit into memory, or if you want efficient, persistent storage of NumPy arrays, then h5py is the way to go. One of my favorite techniques is to store my extracted features in an h5py dataset and then apply scikit-learn's MiniBatchKMeans to cluster the features. The dataset never has to be loaded off disk all at once, and the memory footprint stays extremely small, even for thousands of feature vectors.
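A sketch of that workflow, with illustrative file names and sizes: write feature vectors to an HDF5 dataset with h5py, then cluster them with scikit-learn's MiniBatchKMeans via `partial_fit`, reading the data back one chunk at a time so the whole dataset never sits in memory.

```python
import h5py
import numpy as np
from sklearn.cluster import MiniBatchKMeans

# Write 10,000 synthetic 128-D "feature vectors" to disk in chunks.
with h5py.File("features.hdf5", "w") as f:
    dset = f.create_dataset("features", (10000, 128), dtype="float32")
    for i in range(0, 10000, 1000):
        dset[i:i + 1000] = np.random.rand(1000, 128)

# Cluster in mini-batches; each slice reads only that chunk from disk.
kmeans = MiniBatchKMeans(n_clusters=64, random_state=42, n_init=3)
with h5py.File("features.hdf5", "r") as f:
    dset = f["features"]
    for i in range(0, dset.shape[0], 1000):
        kmeans.partial_fit(dset[i:i + 1000])

print(kmeans.cluster_centers_.shape)  # (64, 128)
```

Because `dset[i:i + 1000]` only reads that slice from disk, peak memory is bounded by the batch size rather than the dataset size.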