Calling np.fix, np.isposinf, and np.isneginf with f(x, y=out)
is deprecated - the argument should be passed as f(x, out=out), which
matches other ufunc-like interfaces.
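
For illustration, a minimal sketch of the preferred spelling:

>>> import numpy as np
>>> x = np.array([2.7, -2.7])
>>> out = np.empty_like(x)
>>> result = np.fix(x, out=out)   # np.fix(x, y=out) now issues a DeprecationWarning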

Use of the C-API NPY_CHAR type number deprecated since version 1.7 will
now raise deprecation warnings at runtime. Extensions built with older f2py
versions need to be recompiled to remove the warning.

np.ma.argsort, np.ma.minimum.reduce, and np.ma.maximum.reduce
should be called with an explicit axis argument when applied to arrays with
more than 2 dimensions, as the default value of this argument (None) is
inconsistent with the rest of numpy (-1, 0, and 0, respectively).

np.ma.MaskedArray.mini is deprecated, as it almost duplicates the
functionality of np.ma.MaskedArray.min. Exactly equivalent behaviour
can be obtained with np.ma.minimum.reduce.

The single-argument form of np.ma.minimum and np.ma.maximum is
deprecated. np.ma.minimum(x) should now be spelt
np.ma.minimum.reduce(x), which is consistent with how this would be done
with np.minimum.

Calling ndarray.conjugate on non-numeric dtypes is deprecated (it
should match the behavior of np.conjugate, which throws an error).

Calling expand_dims when the axis keyword does not satisfy
-a.ndim - 1 <= axis <= a.ndim, where a is the array being reshaped,
is deprecated.
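
For illustration, with a 2-dimensional array the valid range is -3 <= axis <= 2:

>>> import numpy as np
>>> a = np.zeros((2, 3))
>>> np.expand_dims(a, axis=2).shape   # within range
(2, 3, 1)
>>> out = np.expand_dims(a, axis=5)   # out of range: deprecated, an error in the future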

Assignment between structured arrays with different field names will change
in NumPy 1.14. Previously, fields in the dst would be set to the value of the
identically-named field in the src. In numpy 1.14 fields will instead be
assigned ‘by position’: The n-th field of the dst will be set to the n-th
field of the src array. Note that the FutureWarning raised in NumPy 1.12
incorrectly reported this change as scheduled for NumPy 1.13 rather than
NumPy 1.14.

numpy.hstack() now throws ValueError instead of IndexError when
input is empty.

Functions taking an axis argument, when that argument is out of range, now
throw np.AxisError instead of a mixture of IndexError and
ValueError. For backwards compatibility, AxisError subclasses both of
these.
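
A minimal sketch of the new behavior:

>>> import numpy as np
>>> try:
...     np.ones((2, 2)).sum(axis=5)
... except np.AxisError as e:
...     # AxisError subclasses both, so existing except clauses keep working
...     print(isinstance(e, ValueError), isinstance(e, IndexError))
True True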

Support has been removed for certain obscure dtypes that were unintentionally
allowed, of the form (old_dtype,new_dtype), where either of the dtypes
is or contains the object dtype. As an exception, dtypes of the form
(object,[('name',object)]) are still supported due to evidence of
existing use.

Previously bool(dtype) would fall back to the default python
implementation, which checked if len(dtype) > 0. Since dtype objects
implement __len__ as the number of record fields, bool of scalar dtypes
would evaluate to False, which was unintuitive. Now bool(dtype) == True
for all dtypes.
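
For example:

>>> import numpy as np
>>> bool(np.dtype(np.float64))        # previously False, since len(dtype) == 0
True
>>> bool(np.dtype([('a', np.int32)]))
True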

__getslice__ and __setslice__ are no longer needed in ndarray subclasses¶

When subclassing np.ndarray in Python 2.7, it is no longer necessary to
implement __*slice__ on the derived class, as __*item__ will intercept
these calls correctly.

Any code that did implement these will work exactly as before. Code that
invokes ndarray.__getslice__ (e.g. through super(...).__getslice__) will
now issue a DeprecationWarning - .__getitem__(slice(start, end)) should be
used instead.

It is now allowed to remove a zero-sized axis from NpyIter, which may mean
that code removing axes from NpyIter has to add an additional check when
accessing the removed dimensions later on.

The largest followup change is that gufuncs are now allowed to have zero-sized
inner dimensions. This means that a gufunc now has to anticipate an empty inner
dimension, whereas previously this was not possible and an error was raised
instead.

For most gufuncs no change should be necessary. However, it is now possible
for gufuncs with a signature such as (...,N,M)->(...,M) to return
a valid result if N=0 without further wrapping code.

Similar to PyArray_MapIterArray but with an additional copy_if_overlap
argument. If copy_if_overlap != 0, it checks whether the input has memory
overlap with any of the other arrays and makes copies as appropriate to avoid
problems if the input is modified during the iteration. See the documentation
for more details.

This is the renamed and redesigned __numpy_ufunc__. Any class, ndarray
subclass or not, can define this method or set it to None in order to
override the behavior of NumPy’s ufuncs. This works quite similarly to Python’s
__mul__ and other binary operation routines. See the documentation for a
more detailed description of the implementation and behavior of this new
option. The API is provisional; we do not yet guarantee backward compatibility,
as modifications may be made pending feedback. See the NEP and
documentation for more details.
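
A minimal sketch of opting out (the class here is illustrative):

>>> import numpy as np
>>> class OptsOut:
...     __array_ufunc__ = None      # ufuncs involving this class raise TypeError
...     def __radd__(self, other):
...         return 'OptsOut wins'
>>> np.arange(3) + OptsOut()        # ndarray defers instead of calling np.add
'OptsOut wins'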

This ufunc corresponds to the Python builtin divmod, and is used to implement
divmod when called on numpy arrays. np.divmod(x, y) calculates a result
equivalent to (np.floor_divide(x, y), np.remainder(x, y)) but is
approximately twice as fast as calling the functions separately.
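
For example:

>>> import numpy as np
>>> q, r = np.divmod(np.arange(5), 3)   # one pass instead of two separate calls
>>> q
array([0, 0, 0, 1, 1])
>>> r
array([0, 1, 2, 0, 1])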

Add a new block function to the current stacking functions vstack,
hstack, and stack. This allows concatenation across multiple axes
simultaneously, with a similar syntax to array creation, but where elements
can themselves be arrays. For instance, a block matrix can be assembled from
sub-blocks, as in the sketch below:
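
>>> import numpy as np
>>> A = np.eye(2) * 2
>>> B = np.eye(3) * 3
>>> result = np.block([
...     [A,               np.zeros((2, 3))],
...     [np.ones((3, 2)), B               ],
... ])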

On platforms providing the backtrace function NumPy will try to avoid
creating temporaries in expressions involving basic numeric types.
For example d = a + b + c is transformed to d = a + b; d += c, which can
improve performance for large arrays as less memory bandwidth is required to
perform the operation.

Support for returning arrays of arbitrary dimensions in apply_along_axis¶

Previously, only scalars or 1D arrays could be returned by the function passed
to apply_along_axis. Now, it can return an array of any dimensionality
(including 0D), and the shape of this array replaces the axis of the array
being iterated over.
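
An illustrative sketch:

>>> import numpy as np
>>> a = np.arange(24).reshape(2, 3, 4)
>>> # each length-4 slice along the last axis becomes a 2x2 array
>>> np.apply_along_axis(lambda v: v.reshape(2, 2), -1, a).shape
(2, 3, 2, 2)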

NumPy now supports memory tracing with the tracemalloc module of Python 3.6
or newer. Memory allocations from NumPy are placed into the domain defined by
numpy.lib.tracemalloc_domain.
Note that NumPy allocations will not show up in tracemalloc on earlier Python
versions.

Setting NPY_RELAXED_STRIDES_DEBUG=1 in the environment when relaxed stride
checking is enabled will cause NumPy to be compiled with the affected strides
set to the maximum value of npy_intp in order to help detect invalid usage of
the strides in downstream projects. When enabled, invalid usage often results
in an error being raised, but the exact type of error depends on the details of
the code. TypeError and OverflowError have been observed in the wild.

It was previously the case that this option was disabled for releases and
enabled in master and changing between the two required editing the code. It is
now disabled by default but can be enabled for test builds.

Operations where ufunc input and output operands have memory overlap
produced undefined results in previous NumPy versions, due to data
dependency issues. In NumPy 1.13.0, results from such operations are
now defined to be the same as for equivalent operations where there is
no memory overlap.

Operations affected now make temporary copies, as needed to eliminate
data dependency. As detecting these cases is computationally
expensive, a heuristic is used, which may in rare cases result in
needless temporary copies. For operations where the data dependency
is simple enough for the heuristic to analyze, temporary copies will
not be made even if the arrays overlap, if it can be deduced that copies
are not necessary. As an example, np.add(a, b, out=a) will not
involve copies.

To illustrate a previously undefined operation:

>>> x = np.arange(16).astype(float)
>>> np.add(x[1:], x[:-1], out=x[1:])

In NumPy 1.13.0 the last line is guaranteed to be equivalent to:

>>> np.add(x[1:].copy(), x[:-1].copy(), out=x[1:])

A similar operation with simple non-problematic data dependence is:

>>> x = np.arange(16).astype(float)
>>> np.add(x[1:], x[:-1], out=x[:-1])

It will continue to produce the same results as in previous NumPy
versions, and will not involve unnecessary temporary copies.

The change applies also to in-place binary operations, for example:

>>> x = np.random.rand(500, 500)
>>> x += x.T

This statement is now guaranteed to be equivalent to x[...] = x + x.T,
whereas in previous NumPy versions the results were undefined.

Extensions that incorporate Fortran libraries can now be built using the free
MinGW toolset, also under Python 3.5. This works best for extensions that only
do calculations and use the runtime modestly (reading and writing from files,
for instance). Note that this does not remove the need for Mingwpy; if you make
extensive use of the runtime, you will most likely run into issues. Instead,
it should be regarded as a band-aid until Mingwpy is fully functional.

Extensions can also be compiled using the MinGW toolset using the runtime
library from the (moveable) WinPython 3.4 distribution, which can be useful for
programs with a PySide1/Qt4 front-end.

In previous versions of NumPy, the finfo function returned invalid
information about the double double format of the longdouble float type
on Power PC (PPC). The invalid values resulted from the failure of the NumPy
algorithm to deal with the variable number of digits in the significand
that are a feature of PPC long doubles. This release bypasses the failing
algorithm by using heuristics to detect the presence of the PPC double double
format. A side-effect of using these heuristics is that the finfo
function is faster than in previous releases.

Comparisons of masked arrays were buggy for masked scalars and failed for
structured arrays with dimension higher than one. Both problems are now
solved. In the process, it was ensured that in getting the result for a
structured array, masked fields are properly ignored, i.e., the result is equal
if all fields that are non-masked in both are equal, thus making the behaviour
identical to what one gets by comparing an unstructured masked array and then
doing .all() over some axis.

np.matrix with boolean elements can now be created using the string syntax¶

np.matrix failed whenever one attempted to use it with booleans, e.g.,
np.matrix('True'). Now, this works as expected.

NumPy comes bundled with a minimal implementation of lapack for systems without
a lapack library installed, under the name of lapack_lite. This has been
upgraded from LAPACK 3.0.0 (June 30, 1999) to LAPACK 3.2.2 (June 30, 2010). See
the LAPACK changelogs for details on all the changes this entails.

While no new features are exposed through numpy, this fixes some bugs
regarding “workspace” sizes, and in some places may use faster algorithms.

By default, argsort now places the masked values at the end of the sorted
array, in the same way that sort already did. Additionally, the
end_with argument is added to argsort, for consistency with sort.
Note that this argument is not added at the end, so breaks any code that
passed fill_value as a positional argument.

For ndarray subclasses, numpy.average will now return an instance of the
subclass, matching the behavior of most other NumPy functions such as mean.
As a consequence, calls that previously returned a scalar may now return a
subclass array scalar.

Previously, these functions always treated identical objects as equal. This had
the effect of overriding comparison failures, comparison of objects that did
not return booleans, such as np.arrays, and comparison of objects where the
results differed from object identity, such as NaNs.

It is now possible to adjust the behavior of the function when dealing
with the covariance matrix by using two new keyword arguments:

tol can be used to specify a tolerance to use when checking that
the covariance matrix is positive semidefinite.

check_valid can be used to configure what the function will do in the
presence of a matrix that is not positive semidefinite. Valid options are
ignore, warn and raise. The default value, warn, keeps the
behavior used in previous releases.
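
A minimal sketch (the covariance matrix here is illustrative):

>>> import numpy as np
>>> cov = [[1.0, 2.0],
...        [2.0, 1.0]]   # not positive semidefinite
>>> sample = np.random.multivariate_normal([0, 0], cov, check_valid='warn')    # default
>>> sample = np.random.multivariate_normal([0, 0], cov, check_valid='ignore')  # silent
>>> np.random.multivariate_normal([0, 0], cov, check_valid='raise')  # raises ValueError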

Previously, np.testing.assert_array_less ignored all infinite values. This
is not the expected behavior both according to documentation and intuitively.
Now, -inf < x < inf is considered True for any real number x and all
other cases fail.

Some warnings that were previously hidden by the assert_array_
functions are not hidden anymore. In most cases the warnings should be
correct and, should they occur, will require changes to the tests using
these functions.
For the masked array assert_equal version, warnings may occur when
comparing NaT. The function presently does not handle NaT or NaN
specifically, and it may be best to avoid using it on such values at this
time should a warning show up due to this change.

The ABCPolyBase class, from which the convenience classes are derived, sets
__array_ufunc__ = None in order to opt out of ufuncs. If a polynomial
convenience class instance is passed as an argument to a ufunc, a TypeError
will now be raised.

For calls to ufuncs, it was already possible, and recommended, to use an
out argument with a tuple for ufuncs with multiple outputs. This has now
been extended to output arguments in the reduce, accumulate, and
reduceat methods. This is mostly for compatibility with __array_ufunc__;
there are no ufuncs yet that have more than one output.

NumPy 1.12.1 supports Python 2.7 and 3.4 - 3.6 and fixes bugs and regressions
found in NumPy 1.12.0. In particular, the regression in f2py constant parsing
is fixed. Wheels for Linux, Windows, and OS X can be found on PyPI.

The NumPy 1.12.0 release contains a large number of fixes and improvements, but
few that stand out above all others. That makes picking out the highlights
somewhat arbitrary but the following may be of particular interest or indicate
areas likely to have future consequences.

Order of operations in np.einsum can now be optimized for large speed improvements.

New signature argument to np.vectorize for vectorizing with core dimensions.

If a ‘width’ parameter is passed into binary_repr that is insufficient to
represent the number in base 2 (positive) or 2’s complement (negative) form,
the function used to silently ignore the parameter and return a representation
using the minimal number of bits needed for the form in question. Such behavior
is now considered unsafe from a user perspective and will raise an error in the
future.

In 1.13 NaT will always compare False except for NaT != NaT,
which will be True. In short, NaT will behave like NaN.

In 1.13 np.average will preserve subclasses, to match the behavior of most
other numpy functions such as np.mean. In particular, this means calls which
returned a scalar may return a 0-d subclass object instead.

In 1.13 the behavior of structured arrays involving multiple fields will change
in two ways:

First, indexing a structured array with multiple fields (e.g.,
arr[['f1', 'f3']]) will return a view into the original array in 1.13,
instead of a copy. Note the returned view will have extra padding bytes
corresponding to intervening fields in the original array, unlike the copy in
1.12, which will affect code such as arr[['f1', 'f3']].view(newdtype).

Second, for numpy versions 1.6 to 1.12 assignment between structured arrays
occurs “by field name”: Fields in the destination array are set to the
identically-named field in the source array or to 0 if the source does not have
a field.

In 1.13 assignment will instead occur “by position”: The Nth field of the
destination will be set to the Nth field of the source regardless of field
name. The old behavior can be obtained by using indexing to reorder the fields
before assignment, e.g., b[['x', 'y']] = a[['y', 'x']].
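
A minimal sketch of the difference (dtypes and field orders are illustrative):

>>> import numpy as np
>>> a = np.ones(2,  dtype=[('x', 'i8'), ('y', 'f8')])
>>> b = np.zeros(2, dtype=[('y', 'f8'), ('x', 'i8')])
>>> b[:] = a                       # 1.13: by position, so b['y'] receives a['x']
>>> b[['x', 'y']] = a[['x', 'y']]  # select fields in matching order to assign by name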

For arrays

Zero to negative integer powers returned the least integral value.

Both 1, -1 to negative integer powers returned correct values.

The remaining integers returned zero when raised to negative integer powers.

For scalars

Zero to negative integer powers returned the least integral value.

Both 1, -1 to negative integer powers returned correct values.

The remaining integers sometimes returned zero, sometimes the
correct float depending on the integer type combination.

All of these cases now raise a ValueError except for those integer
combinations whose common type is float, for instance uint64 and int8. It was
felt that a simple rule was the best way to go rather than have special
exceptions for the integer units. If you need negative powers, use an inexact
type.

numpy functions that take a keepdims kwarg now pass the value
through to the corresponding methods on ndarray sub-classes. Previously the
keepdims keyword would be silently dropped. These functions now have
the following behavior:

If user does not provide keepdims, no keyword is passed to the underlying
method.

Any user-provided value of keepdims is passed through as a keyword
argument to the method.

This will raise in the case where the method does not support a
keepdims kwarg and the user explicitly passes in keepdims.

The precision check for scalars has been changed to match that for arrays. It
is now:

abs(actual - desired) < 1.5 * 10**(-decimal)

Note that this is looser than previously documented, but agrees with the
previous implementation used in assert_array_almost_equal. Due to the
change in implementation some very delicate tests may fail that did not
fail before.

When raise_warnings="develop" is given, all uncaught warnings will now
be considered a test failure. Previously only selected ones were raised.
Warnings which are not caught or raised (mostly when in release mode)
will be shown once during the test cycle similar to the default python
settings.

The assert_warns function and context manager are now more specific
to the given warning category. This increased specificity leads to them
being handled according to the outer warning settings. This means that
no warning may be raised in cases where a wrong category warning is given
and ignored outside the context. Alternatively the increased specificity
may mean that warnings that were incorrectly ignored will now be shown
or raised. See also the new suppress_warnings context manager.
The same is true for the deprecated decorator.

Binary distributions of numpy may need to run specific hardware checks or load
specific libraries during numpy initialization. For example, if we are
distributing numpy with a BLAS library that requires SSE2 instructions, we
would like to check the machine on which numpy is running does have SSE2 in
order to give an informative error.

Add a hook in numpy/__init__.py to import a numpy/_distributor_init.py
file that will remain empty (bar a docstring) in the standard numpy source,
but that can be overwritten by people making binary distributions of numpy.

The new function polyvalfromroots evaluates a polynomial at given points
from the roots of the polynomial. This is useful for higher order polynomials,
where expansion into polynomial coefficients is inaccurate at machine
precision.

The new function geomspace generates a geometric sequence. It is similar
to logspace, but with start and stop specified directly:
geomspace(start, stop) behaves the same as
logspace(log10(start), log10(stop)).
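
For example:

>>> import numpy as np
>>> np.geomspace(1, 1000, num=4)   # array([ 1., 10., 100., 1000.])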

A new context manager suppress_warnings has been added to the testing
utils. This context manager is designed to help reliably test warnings.
Specifically to reliably filter/ignore warnings. Ignoring warnings
by using an “ignore” filter in Python versions before 3.4.x can quickly
result in these (or similar) warnings not being tested reliably.

The context manager allows one to filter (as well as record) warnings similarly
to the catch_warnings context, but allows for easier specificity.
Also, printing warnings that have not been filtered, or nesting the
context manager, will work as expected. Additionally, it is possible
to use the context manager as a decorator, which can be useful when
multiple tests need to hide the same warning.
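
A minimal sketch (the warning message is illustrative):

>>> import warnings
>>> from numpy.testing import suppress_warnings
>>> with suppress_warnings() as sup:
...     sup.filter(DeprecationWarning)            # hide these inside the block
...     rec = sup.record(UserWarning, "example")  # record matching warnings instead
...     warnings.warn("example", UserWarning)
>>> len(rec)
1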

These functions wrapped the non-masked versions, but propagate through masked
values. There are two different propagation modes. The default causes masked
values to contaminate the result with masks, but the other mode only outputs
masks if there is no alternative.

The new float_power ufunc is like the power function except all
computation is done in a minimum precision of float64. There was a long
discussion on the numpy mailing list about how to treat integers raised to
negative integer powers, and a popular proposal was that the __pow__ operator
should always return results of at least float64 precision. The float_power
function implements that option. Note that it does not support object arrays.

This argument allows for vectorizing user defined functions with core
dimensions, in the style of NumPy’s
generalized universal functions. This allows
for vectorizing a much broader class of functions. For example, an arbitrary
distance metric that combines two vectors to produce a scalar could be
vectorized with signature='(n),(n)->()'. See np.vectorize for full
details.
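
A sketch of such a distance metric (the function itself is illustrative):

>>> import numpy as np
>>> euclidean = np.vectorize(lambda u, v: np.sqrt(((u - v) ** 2).sum()),
...                          signature='(n),(n)->()')
>>> euclidean(np.zeros((5, 3)), np.ones((5, 3))).shape   # vectorized over 5 pairs
(5,)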

The previous identity was 1, with the result that all bits except the LSB were
masked out when the reduce method was used. The new identity is -1, which
should work properly on two’s complement machines as all bits will be set to
one.

The caches in np.fft that speed up successive FFTs of the same length can no
longer grow without bounds. They have been replaced with LRU (least recently
used) caches that automatically evict no longer needed items if either the
memory size or item count limit has been reached.

Fixed several interfaces that explicitly disallowed arrays with zero-width
string dtypes (i.e. dtype('S0') or dtype('U0')), and fixed several
bugs where such dtypes were not handled properly. In particular, changed
ndarray.__new__ to not implicitly convert dtype('S0') to
dtype('S1') (and likewise for unicode) when creating new arrays.

np.einsum now supports the optimize argument, which will optimize the
order of contraction. For example, np.einsum would complete the chain dot
example np.einsum('ij,jk,kl->il', a, b, c) in a single pass, which would
scale like N^4; however, when optimize=True, np.einsum will create
an intermediate array to reduce this scaling to N^3, or effectively
np.dot(a, b).dot(c). Usage of intermediate tensors to reduce scaling has
been applied to the general einsum summation notation. See np.einsum_path
for more details.
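
For example (array sizes are illustrative):

>>> import numpy as np
>>> a, b, c = (np.random.rand(30, 30) for _ in range(3))
>>> result = np.einsum('ij,jk,kl->il', a, b, c, optimize=True)  # pairwise contraction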

The quicksort kind of np.sort and np.argsort is now an introsort, which
is a regular quicksort that switches to heapsort when not enough progress is
made. This retains the good quicksort performance while changing the worst-case
runtime from O(N^2) to O(N*log(N)).

The ediff1d function uses an array instead of a flat iterator for the
subtraction. When to_begin or to_end is not None, the subtraction is performed
in place to eliminate a copy operation. A side effect is that certain
subclasses are handled better, namely astropy.Quantity, since the complete
array is created, wrapped, and then the begin and end values are set, instead
of using concatenate.

The computation of the mean of float16 arrays is now carried out in float32 for
improved precision. This should be useful in packages such as Theano
where the precision of float16 is adequate and its smaller footprint is
desirable.

All array-like methods are now called with keyword arguments in fromnumeric.py¶

Internally, many array-like methods in fromnumeric.py were being called with
positional arguments instead of keyword arguments as their external signatures
were doing. This caused a complication in the downstream ‘pandas’ library
that encountered an issue with ‘numpy’ compatibility. Now, all array-like
methods in this module are called with keyword arguments instead.

Previously operations on a memmap object would misleadingly return a memmap
instance even if the result was actually not memmapped. For example,
arr+1 or arr+arr would return memmap instances, although no memory
from the output array is memmapped. Version 1.12 returns ordinary numpy arrays
from these operations.

Also, reduction of a memmap (e.g. .sum(axis=None)) now returns a numpy
scalar instead of a 0d memmap.

The stacklevel for python based warnings was increased so that most warnings
will report the offending line of the user code instead of the line at which
the warning itself is raised. Passing of stacklevel is now tested to ensure
that new warnings will receive the stacklevel argument.

This causes warnings with the “default” or “module” filter to be shown once
for every offending user code line or user module instead of only once. On
python versions before 3.4, this can cause warnings to appear that were falsely
ignored before, which may be surprising especially in test suites.

Numpy 1.11.3 fixes a bug that leads to file corruption when very large files
opened in append mode are used in ndarray.tofile. It supports Python
versions 2.6 - 2.7 and 3.2 - 3.5. Wheels for Linux, Windows, and OS X can be
found on PyPI.

Numpy 1.11.2 supports Python 2.6 - 2.7 and 3.2 - 3.5. It fixes bugs and
regressions found in Numpy 1.11.1 and includes several build related
improvements. Wheels for Linux, Windows, and OS X can be found on PyPI.

Numpy 1.11.1 supports Python 2.6 - 2.7 and 3.2 - 3.5. It fixes bugs and
regressions found in Numpy 1.11.0 and includes several build related
improvements. Wheels for Linux, Windows, and OS X can be found on PyPI.

Numpy now uses setuptools for its builds instead of plain distutils.
This fixes usage of install_requires='numpy' in the setup.py files of
projects that depend on Numpy (see gh-6551). It potentially affects the way
that build/install methods for Numpy itself behave though. Please report any
unexpected behavior on the Numpy issue tracker.

Relaxed stride checking will become the default. See the 1.8.0 release
notes for a more extended discussion of what this change implies.

The behavior of the datetime64 “not a time” (NaT) value will be changed
to match that of floating point “not a number” (NaN) values: all
comparisons involving NaT will return False, except for NaT != NaT which
will return True.

Non-integers used as index values will raise TypeError,
e.g., in reshape, take, and specifying reduce axis.

In a future release the following changes will be made.

The rand function exposed in numpy.testing will be removed. That
function is left over from early Numpy and was implemented using the
Python random module. The random number generators from numpy.random
should be used instead.

The ndarray.view method will only allow c_contiguous arrays to be
viewed using a dtype of different size causing the last dimension to
change. That differs from the current behavior where arrays that are
f_contiguous but not c_contiguous can be viewed as a dtype type of
different size causing the first dimension to change.

Slicing a MaskedArray will return views of both data and mask.
Currently the mask is copy-on-write and changes to the mask in the slice do
not propagate to the original mask. See the FutureWarnings section below for
details.

A consensus of datetime64 users agreed that this behavior is undesirable
and at odds with how datetime64 is usually used (e.g., by pandas). For most use cases, a timezone naive datetime
type is preferred, similar to the datetime.datetime type in the Python
standard library. Accordingly, datetime64 no longer assumes that input is in
local time, nor does it print local times:

For backwards compatibility, datetime64 still parses timezone offsets, which
it handles by converting to UTC. However, the resulting datetime is timezone
naive:

>>> np.datetime64('2000-01-01T00:00:00-08')
DeprecationWarning: parsing timezone aware datetimes is deprecated;
this will raise an error in the future
numpy.datetime64('2000-01-01T08:00:00')

As a corollary to this change, we no longer prohibit casting between datetimes
with date units and datetimes with time units. With timezone naive datetimes,
the rule for casting from dates to times is no longer ambiguous.

This behaviour mimics that of other functions such as np.inner. If the two
arguments cannot be cast to a common type, it could previously have raised a
TypeError or ValueError depending on their order. Now, np.dot will always
raise a TypeError.

In np.lib.split an empty array in the result always had dimension
(0,) no matter the dimensions of the array being split. This
has been changed so that the dimensions will be preserved. A
FutureWarning for this change has been in place since Numpy 1.9 but,
due to a bug, sometimes no warning was raised and the dimensions were
already preserved.

These operators are implemented with the remainder and floor_divide
functions respectively. Those functions are now based around fmod and are
computed together so as to be compatible with each other and with the Python
versions for float types. The results should be marginally more accurate or
outright bug fixes compared to the previous results, but they may
differ significantly in cases where roundoff makes a difference in the integer
returned by floor_divide. Some corner cases also change, for instance, NaN
is always returned for both functions when the divisor is zero,
divmod(1.0, inf) returns (0.0, 1.0) except on MSVC 2008, and
divmod(-1.0, inf) returns (-1.0, inf).

Removed the check_return and inner_loop_selector members of
the PyUFuncObject struct (replacing them with reserved slots
to preserve struct layout). These were never used for anything, so
it’s unlikely that any third-party code is using them either, but we
mention it here for completeness.

In python 2, objects which are instances of old-style user-defined classes no
longer automatically count as ‘object’ type in the dtype-detection handler.
Instead, as in python 3, they may potentially count as sequences, but only if
they define both a __len__ and a __getitem__ method. This fixes a segfault
and inconsistency between python 2 and 3.

np.histogram now provides plugin estimators for automatically
estimating the optimal number of bins. Passing one of [‘auto’, ‘fd’,
‘scott’, ‘rice’, ‘sturges’] as the argument to ‘bins’ results in the
corresponding estimator being used.
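
For example:

>>> import numpy as np
>>> counts, edges = np.histogram(np.random.randn(1000), bins='auto')  # estimator picks the bin count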

A benchmark suite using Airspeed Velocity has been added, converting the
previous vbench-based one. You can run the suite locally via
python runtests.py --bench. For more details, see benchmarks/README.rst.

A new function, np.shares_memory, that can check exactly whether two
arrays have memory overlap has been added. np.may_share_memory also now has
an option to spend more effort to reduce false positives.
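
A small sketch of the difference between the two:

>>> import numpy as np
>>> x = np.zeros(10)
>>> np.may_share_memory(x[::2], x[1::2])   # cheap bounds check: conservative
True
>>> np.shares_memory(x[::2], x[1::2])      # exact check: the views never overlap
False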

SkipTest and KnownFailureException exception classes are exposed
in the numpy.testing namespace. Raise them in a test function to mark
the test to be skipped or mark it as a known failure, respectively.

f2py.compile has a new extension keyword parameter that allows the
fortran extension to be specified for generated temp files. For instance,
the files can be specified to be *.f90. The verbose argument is
now also honored; it was previously ignored.

A dtype parameter has been added to np.random.randint
Random ndarrays of the following types can now be generated:

np.bool,

np.int8, np.uint8,

np.int16, np.uint16,

np.int32, np.uint32,

np.int64, np.uint64,

np.int_, np.intp

The specification is by precision rather than by C type. Hence, on some
platforms np.int64 may be a long instead of longlong even if
the specified dtype is longlong because the two may have the same
precision. The resulting type depends on which C type numpy uses for the
given precision. The byteorder specification is also ignored; the
generated arrays are always in native byte order.
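
For example:

>>> import numpy as np
>>> sample = np.random.randint(0, 256, size=8, dtype=np.uint8)
>>> sample.dtype
dtype('uint8')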

A new np.moveaxis function allows for moving one or more array axes
to a new position by explicitly providing source and destination axes.
This function should be easier to use than the current rollaxis
function, as well as providing more functionality.
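
For example:

>>> import numpy as np
>>> x = np.zeros((3, 4, 5))
>>> np.moveaxis(x, 0, -1).shape
(4, 5, 3)
>>> np.moveaxis(x, [0, 1], [-1, -2]).shape
(5, 4, 3)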

The deg parameter of the various numpy.polynomial fits has been
extended to accept a list of the degrees of the terms to be included in
the fit, the coefficients of all other terms being constrained to zero.
The change is backward compatible; passing a scalar deg will behave
as before.

A divmod function for float types modeled after the Python version has
been added to the npy_math library.

When constructing a new MaskedArray instance, it can be configured with
an order argument analogous to the one when calling np.ndarray. The
addition of this argument allows for the proper processing of an order
argument in several MaskedArray-related utility functions such as
np.ma.core.array and np.ma.core.asarray.

Creating a masked array with mask=True (resp. mask=False) now uses
np.ones (resp. np.zeros) to create the mask, which is faster and
avoids a big memory peak. Another optimization was done to avoid a memory
peak and useless computations when printing a masked array.

Previously, gemm BLAS operations were used for all matrix products. Now,
if the matrix product is between a matrix and its transpose, it will use
syrk BLAS operations for a performance boost. This optimization has been
extended to @, numpy.dot, numpy.inner, and numpy.matmul.

Note: Requires the transposed and non-transposed matrices to share data.

The method build_src.generate_a_pyrex_source will remain available; it
has been monkeypatched by users to support Cython instead of Pyrex. It’s
recommended to switch to a better supported method of building Cython
extensions though.

The linalg.norm function now does all its computations in floating point
and returns floating results. This change fixes bugs due to integer overflow
and the failure of abs with signed integers of minimum value, e.g., int8(-128).
For consistency, floats are used even where an integer might work.

The F_CONTIGUOUS flag was used to signal that views using a dtype that
changed the element size would change the first index. This was always
problematical for arrays that were both F_CONTIGUOUS and C_CONTIGUOUS
because C_CONTIGUOUS took precedence. Relaxed stride checking results in
more such dual contiguous arrays and breaks some existing code as a result.
Note that this also affects changing the dtype by assigning to the dtype
attribute of an array. The aim of this deprecation is to restrict views to
C_CONTIGUOUS arrays at some future time. A backward compatible workaround
is to use a.T.view(...).T instead. A parameter may also be
added to the view method to explicitly ask for Fortran order views, but
that will not be backward compatible.

It is currently possible to pass in values for the order
parameter in methods like array.flatten or array.ravel
that are not one of the following: ‘C’, ‘F’, ‘A’, ‘K’ (note that
all of these possible values are both unicode and case insensitive).
Such behavior will not be allowed in future releases.

The Python standard library random number generator was previously exposed
in the testing namespace as testing.rand. Using this generator is
not recommended and it will be removed in a future release. Use generators
from numpy.random namespace instead.

In accordance with the Python C API, which gives preference to the half-open
interval over the closed one, np.random.random_integers is being
deprecated in favor of calling np.random.randint, which has been
enhanced with the dtype parameter as described under “New Features”.
However, np.random.random_integers will not be removed anytime soon.

Currently a slice of a masked array contains a view of the original data and a
copy-on-write view of the mask. Consequently, any changes to the slice’s mask
will result in a copy of the original mask being made and that new mask being
changed rather than the original. For example, if we make a slice of the
original like so, view = original[:], then modifications to the data in one
array will affect the data of the other but, because the mask will be copied
during assignment operations, changes to the mask will remain local. A similar
situation occurs when explicitly constructing a masked array using
MaskedArray(data, mask), the returned array will contain a view of data
but the mask will be a copy-on-write view of mask.

In the future, these cases will be normalized so that the data and mask arrays
are treated the same way and modifications to either will propagate between
views. In 1.11, numpy will issue a MaskedArrayFutureWarning warning
whenever user code modifies the mask of a view that in the future may cause
values to propagate back to the original. To silence these warnings and make
your code robust against the upcoming changes, you have two options: if you
want to keep the current behavior, call masked_view.unshare_mask() before
modifying the mask. If you want to get the future behavior early, use
masked_view._sharedmask = False. However, note that setting the
_sharedmask attribute will break following explicit calls to
masked_view.unshare_mask().

This release is a bugfix source release motivated by a segfault regression.
No windows binaries are provided for this release, as there appear to be
bugs in the toolchain we use to generate those files. Hopefully that
problem will be fixed for the next release. In the meantime, we suggest
using one of the providers of windows binaries.

The trace function now calls the trace method on subclasses of ndarray,
except for matrix, for which the current behavior is preserved. This is
to help with the units package of AstroPy and hopefully will not cause
problems.

Relaxed stride checking revealed a bug in array_is_fortran(a), which was
using PyArray_ISFORTRAN to check for Fortran contiguity instead of
PyArray_IS_F_CONTIGUOUS. You may want to regenerate swigged files using the
updated numpy.i.

This deprecates assignment of a new descriptor to the dtype attribute of
a non-C-contiguous array if it results in changing the shape. This
effectively bars viewing a multidimensional Fortran array using a dtype
that changes the element size along the first axis.

The reason for the deprecation is that, when relaxed strides checking is
enabled, arrays that are both C and Fortran contiguous are always treated
as C contiguous, which breaks some code that depended on the two being
mutually exclusive for non-scalar arrays of ndim > 1. This deprecation
prepares the way to always enable relaxed stride checking.

Initial support for mingwpy was reverted as it was causing problems for
non-windows builds.

gh-6536 BUG: Revert gh-5614 to fix non-windows build problems

A fix for np.lib.split was reverted because it resulted in “fixing”
behavior that will be present in the Numpy 1.11 and that was already
present in Numpy 1.9. See the discussion of the issue at gh-6575 for
clarification.

gh-6576 BUG: Revert gh-6376 to fix split behavior for empty arrays.

Relaxed stride checking was reverted. There were back compatibility
problems involving views changing the dtype of multidimensional Fortran
arrays that need to be dealt with over a longer timeframe.

This release deals with a few build problems that showed up in 1.10.0. Most
users would not have seen these problems. The differences are:

Compiling with msvc9 or msvc10 for 32 bit Windows now requires SSE2.
This was the easiest fix for what looked to be some miscompiled code when
SSE2 was not used. If you need to compile for 32 bit Windows systems
without SSE2 support, mingw32 should still work.

Make compiling with VS2008 python2.7 SDK easier

Change Intel compiler options so that code will also be generated to
support systems without SSE4.2.

Some _config test functions needed an explicit integer return in
order to avoid the openSUSE rpmlinter erring out.

We ran into a problem with PyPI not allowing reuse of filenames and a
resulting proliferation of ..*.postN releases. Not only were the names
getting out of hand, some packages were unable to work with the postN
suffix.

In array comparisons like arr1 == arr2, many corner cases
involving strings or structured dtypes that used to return scalars
now issue FutureWarning or DeprecationWarning, and in the
future will be changed to either perform elementwise comparisons or
raise an error.

In np.lib.split an empty array in the result always had dimension
(0,) no matter the dimensions of the array being split. In Numpy 1.11
that behavior will be changed so that the dimensions will be preserved. A
FutureWarning for this change has been in place since Numpy 1.9 but,
due to a bug, sometimes no warning was raised and the dimensions were
already preserved.

Default casting for inplace operations has changed to 'same_kind'. For
instance, if n is an array of integers, and f is an array of floats, then
n += f will result in a TypeError, whereas in previous Numpy
versions the floats would be silently cast to ints. In the unlikely case
that the example code is not an actual bug, it can be updated in a backward
compatible way by rewriting it as np.add(n, f, out=n, casting='unsafe').
The old 'unsafe' default has been deprecated since Numpy 1.7.

UPDATE: In 1.10.2 the default value of NPY_RELAXED_STRIDE_CHECKING was
changed to false for back compatibility reasons. More time is needed before
it can be made the default. As part of the roadmap a deprecation of
dimension changing views of f_contiguous not c_contiguous arrays was also
added.

There was inconsistent behavior between x.ravel() and np.ravel(x), as
well as between x.diagonal() and np.diagonal(x), with the methods
preserving subtypes while the functions did not. This has been fixed and
the functions now behave like the methods, preserving subtypes except in
the case of matrices. Matrices are special cased for backward
compatibility and still return 1-D arrays as before. If you need to
preserve the matrix subtype, use the methods instead of the functions.

Previously, an inconsistency existed between 1-D inputs (returning a
base ndarray) and higher dimensional ones (which preserved subclasses).
Behavior has been unified, and the return will now be a base ndarray.
Subclasses can still override this behavior by providing their own
nonzero method.

Previously the returned types for recarray fields accessed by attribute and by
index were inconsistent, and fields of string type were returned as chararrays.
Now, fields accessed by either attribute or indexing will return an ndarray for
fields of non-structured type, and a recarray for fields of structured type.
Notably, this affects recarrays containing strings with whitespace, as trailing
whitespace is trimmed from chararrays but kept in ndarrays of string type.
Also, the dtype.type of nested structured fields is now inherited.

Viewing an ndarray as a recarray now automatically converts the dtype to
np.record. See new record array documentation. Additionally, viewing a recarray
with a non-structured dtype no longer converts the result’s type to ndarray -
the result will remain a recarray.

When using the ‘out’ keyword argument of a ufunc, a tuple of arrays, one per
ufunc output, can be provided. For ufuncs with a single output a single array
is also a valid ‘out’ keyword argument. The previous practice of providing a
single array in the ‘out’ keyword argument to be used as the first output for
ufuncs with multiple outputs is deprecated, and will result in a
DeprecationWarning now and an error in the future.

Similar to mean, median and percentile now emit a RuntimeWarning and
return NaN in slices where a NaN is present.
To compute the median or percentile while ignoring invalid values, use the
new nanmedian or nanpercentile functions.

All functions from numpy.testing were once available from
numpy.ma.testutils but not all of them were redefined to work with masked
arrays. Most of those functions have now been removed from
numpy.ma.testutils with a small subset retained in order to preserve
backward compatibility. In the long run this should help avoid mistaken use
of the wrong functions, but it may cause import problems for some.

Previously, customization of compilation of dependency libraries and numpy
itself was only achievable via code changes in the distutils package.
Now numpy.distutils reads in extra flags from each group of the
site.cfg.

By passing --parallel=n or -j n to setup.py build, the compilation of
extensions is now performed in n parallel processes.
The parallelization is limited to files within one extension, so projects using
Cython will not profit because it builds extensions from single files.

A max_rows argument has been added to genfromtxt to limit the
number of rows read in a single call. Using this functionality, it is
possible to read in multiple arrays stored in a single file by making
repeated calls to the function.

np.broadcast_to manually broadcasts an array to a given shape according to
numpy’s broadcasting rules. The functionality is similar to broadcast_arrays,
which in fact has been rewritten to use broadcast_to internally, but only a
single array is necessary.
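
For example:

>>> import numpy as np
>>> np.broadcast_to(np.array([1, 2, 3]), (4, 3))   # a read-only view, no data copy
array([[1, 2, 3],
       [1, 2, 3],
       [1, 2, 3],
       [1, 2, 3]])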

When Python emits a warning, it records that this warning has been emitted in
the module that caused the warning, in a module attribute
__warningregistry__. Once this has happened, it is not possible to emit
the warning again, unless you clear the relevant entry in
__warningregistry__. This makes it hard and fragile to test warnings,
because if your test comes after another that has already caused the warning,
you will not be able to emit the warning or test it. The context manager
clear_and_catch_warnings clears warnings from the module registry on entry
and resets them on exit, meaning that warnings can be re-raised.

The fweights and aweights arguments add new functionality to
covariance calculations by applying two types of weighting to observation
vectors. An array of fweights indicates the number of repeats of each
observation vector, and an array of aweights provides their relative
importance or probability.
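
A minimal sketch (the weights are illustrative):

>>> import numpy as np
>>> m = np.array([[1.0, 2.0, 3.0],
...               [2.0, 1.0, 0.0]])
>>> result = np.cov(m, fweights=[1, 2, 1],        # repeat counts per observation
...                 aweights=[0.5, 1.0, 1.5])     # relative importance per observation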

Python 3.5 adds support for a matrix multiplication operator ‘@’ proposed
in PEP 465. Preliminary support for that has been implemented, and an
equivalent function matmul has also been added for testing purposes and
use in earlier Python versions. The function is preliminary and the order
and number of its optional arguments can be expected to change.

The default normalization has the direct transforms unscaled and the inverse
transforms scaled by 1/n. It is possible to obtain unitary
transforms by setting the keyword argument norm to "ortho" (default is
None) so that both direct and inverse transforms are scaled by
1/sqrt(n).

np.digitize is now implemented in terms of np.searchsorted. This means
that a binary search is used to bin the values, which scales much better
for larger numbers of bins than the previous linear search. It also removes
the requirement for the input array to be 1-dimensional.

np.poly will now cast 1-dimensional input arrays of integer type to double
precision floating point, to prevent integer overflow when computing the monic
polynomial. It is still possible to obtain higher precision results by
passing in an array of object type, filled e.g. with Python ints.

np.interp now has a new parameter period that supplies the period of the
input data xp. In such cases, the input data is properly normalized to the
given period and one end point is added to each extremity of xp in order to
close the previous and the next period cycles, resulting in the correct
interpolation behavior.
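
A small sketch with angular data (the values are illustrative):

>>> import numpy as np
>>> xp = [90, 180, 270, 360]
>>> fp = [0.0, 1.0, 2.0, 3.0]
>>> result = np.interp([45, 405], xp, fp, period=360)  # both points wrap onto one cycle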

np.genfromtxt now correctly handles integers larger than 2**31-1 on
32-bit systems and larger than 2**63-1 on 64-bit systems (it previously
crashed with an OverflowError in these cases). Integers larger than
2**63-1 are converted to floating-point values.

Built-in assumptions that the baseclass behaved like a plain array are being
removed. In particular, setting and getting elements and ranges will respect
baseclass overrides of __setitem__ and __getitem__, and arithmetic
will respect overrides of __add__, __sub__, etc.

Inputs to generalized universal functions are now more strictly checked
against the function’s signature: all core dimensions are now required to
be present in input arrays; core dimensions with the same label must have
the exact same size; and output core dimensions must be specified, either
by a same-label input core dimension or by a passed-in output array.

Normally, comparison operations on arrays perform elementwise
comparisons and return arrays of booleans. But in some corner cases,
especially involving strings or structured dtypes, NumPy has
historically returned a scalar instead. For example (an illustrative sketch):
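
>>> import numpy as np
>>> np.arange(2) == "foo"     # one scalar for the whole comparison
False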

Continuing work started in 1.9, in 1.10 these comparisons will now
raise FutureWarning or DeprecationWarning, and in the future
they will be modified to behave more consistently with other
comparison operations, e.g. (a sketch of the expected future result):
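
>>> np.arange(2) == "foo"     # expected future behavior: elementwise
array([False, False], dtype=bool)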

The values for the bias and ddof arguments to the corrcoef
function canceled in the division implied by the correlation coefficient and
so had no effect on the returned values.

We now deprecate these arguments to corrcoef and the masked array version
ma.corrcoef.

Because we are deprecating the bias argument to ma.corrcoef, we also
deprecate the use of the allow_masked argument as a positional argument,
as its position will change with the removal of bias. allow_masked
will in due course become a keyword-only argument.

Since 1.6, creating a dtype object from its string representation, e.g.
'f4', would issue a deprecation warning if the size did not correspond
to an existing type, and default to creating a dtype of the default size
for the type. Starting with this release, this will now raise a TypeError.

The only exception is object dtypes, where both 'O4' and 'O8' will
still issue a deprecation warning. This platform-dependent representation
will raise an error in the next release.

In preparation for this upcoming change, the string representation of an
object dtype, i.e. np.dtype(object).str, no longer includes the item
size, i.e. will return '|O' instead of '|O4' or '|O8' as
before.

In previous numpy versions operations involving floating point scalars
containing special values NaN, Inf and -Inf caused the result
type to be at least float64. As the special values can be represented
in the smallest available floating point type, the upcast is no longer
performed.

For example the dtype of:

np.array([1.], dtype=np.float32) * float('nan')

now remains float32 instead of being cast to float64.
Operations involving non-special values have not been changed.

If given more than one percentile to compute, numpy.percentile returns an
array instead of a list. A single percentile still returns a scalar. The
array is equivalent to converting the list returned in older versions
to an array via np.array.

If the overwrite_input option is used, the input is only partially
sorted instead of fully sorted.

This may cause problems with folks who depended on the polynomial classes
being derived from PolyBase. They are now all derived from the abstract
base class ABCPolyBase. Strictly speaking, there should be a deprecation
involved, but no external code making use of the old baseclass could be
found.

A bug in one of the algorithms to generate a binomial random variate has
been fixed. This change will likely alter the number of random draws
performed, and hence the sequence location will be different after a
call to distribution.c::rk_binomial_btpe. Any tests which rely on the RNG
being in a known state should be checked and/or updated as a result.

np.random.seed and np.random.RandomState now throw a ValueError
if the seed cannot safely be converted to 32 bit unsigned integers.
Applications that now fail can be fixed by masking the higher 32 bit values to
zero: seed = seed & 0xFFFFFFFF. This is what was done silently in older
versions so the random stream remains the same.

The out argument to np.argmin and np.argmax and their
equivalent C-API functions is now checked to match the desired output shape
exactly. If the check fails a ValueError instead of TypeError is
raised.

The NumPy indexing has seen a complete rewrite in this version. This makes
most advanced integer indexing operations much faster and should have no
other implications. However some subtle changes and deprecations were
introduced in advanced indexing operations:

Boolean indexing into scalar arrays will always return a new 1-d array.
This means that array(1)[array(True)] gives array([1]) and
not the original array.

Advanced indexing into one dimensional arrays used to have
(undocumented) special handling regarding repeating the value array in
assignments when the shape of the value array was too small or did not
match. Code using this will raise an error. For compatibility you can
use arr.flat[index] = values, which uses the old code branch. (For
example, a = np.ones(10); a[np.arange(10)] = [1, 2, 3])

The iteration order over advanced indexes used to be always C-order.
In NumPy 1.9 the iteration order adapts to the inputs and is not
guaranteed (with the exception of a single advanced index which is
never reversed for compatibility reasons). This means that the result
is undefined if multiple values are assigned to the same element. An
example for this is arr[[0, 0], [1, 1]] = [1, 2], which may set
arr[0, 1] to either 1 or 2.

Equivalent to the iteration order, the memory layout of the advanced
indexing result is adapted for faster indexing and cannot be predicted.

All indexing operations return a view or a copy. No indexing operation
will return the original array object. (For example arr[...])

In the future Boolean array-likes (such as lists of python bools) will
always be treated as Boolean indexes and Boolean scalars (including
python True) will be a legal boolean index. At this time, this is
already the case for scalar arrays to allow the general
positive = a[a > 0] to work when a is zero dimensional.

In NumPy 1.8 it was possible to use array(True) and
array(False) equivalent to 1 and 0 if the result of the operation
was a scalar. This will raise an error in NumPy 1.9 and, as noted
above, will be treated as a boolean index in the future.

All non-integer array-likes are deprecated; object arrays of custom
integer-like objects may have to be cast explicitly.

The error reporting for advanced indexing is more informative, however
the error type has changed in some cases. (Broadcasting errors of
indexing arrays are reported as IndexError)

promote_types function now returns a valid string length when given an
integer or float dtype as one argument and a string dtype as another
argument. Previously it always returned the input string dtype, even if it
wasn’t long enough to store the max integer/float value converted to a
string.
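
For example (a small sketch; the resulting length reflects the widest decimal
representation of the integer type, including the sign):

>>> import numpy as np
>>> np.promote_types('i4', 'S8')   # -2147483648 needs 11 characters
dtype('S11')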

can_cast function now returns False in “safe” casting mode for
integer/float dtype and string dtype if the string dtype length is not long
enough to store the max integer/float value converted to a string.
Previously can_cast in “safe” mode returned True for integer/float
dtype and a string dtype of any length.

The astype method now returns an error if the string dtype to cast to
is not long enough in “safe” casting mode to hold the max value of the
integer/float array that is being cast. Previously the casting was
allowed even if the result was truncated.

The unused simple_capsule_dtor function has been removed from
npy_3kcompat.h. Note that this header is not meant to be used outside
of numpy; other projects should be using their own copy of this file when
needed.

When directly accessing the sq_item or sq_ass_item PyObject slots
for item getting, negative indices will not be supported anymore.
PySequence_GetItem and PySequence_SetItem however fix negative
indices so that they can be used there.

When NpyIter_RemoveAxis is called, the iterator range will now be reset.

When a multi index is being tracked and an iterator is not buffered, it is
possible to use NpyIter_RemoveAxis. In this case an iterator can shrink
in size. Because the total size of an iterator is limited, the iterator
may be too large before these calls. In this case its size will be set to -1
and an error issued not at construction time but when removing the multi
index, setting the iterator range, or getting the next function.

This has no effect on currently working code, but highlights the necessity
of checking for an error return if these conditions can occur. In most
cases the arrays being iterated are as large as the iterator so that such
a problem cannot occur.

np.percentile now has the interpolation keyword argument to specify in
which way points should be interpolated if the percentiles fall between two
values. See the documentation for the available options.
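
An illustrative sketch:

>>> import numpy as np
>>> a = np.percentile(np.arange(10), 45)                           # default 'linear'
>>> b = np.percentile(np.arange(10), 45, interpolation='nearest')  # an actual data point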

np.median and np.percentile now support generalized axis arguments like
ufunc reductions do since 1.7. One can now say axis=(index, index) to pick a
list of axes for the reduction. The keepdims keyword argument was also
added to allow convenient broadcasting to arrays of the original shape.
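
For example:

>>> import numpy as np
>>> a = np.arange(24).reshape(2, 3, 4)
>>> np.median(a, axis=(0, 2), keepdims=True).shape
(1, 3, 1)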

The numpy storage format 1.0 only allowed the array header to have a total size
of 65535 bytes. This can be exceeded by structured arrays with a large number
of columns. A new format 2.0 has been added which extends the header size to 4
GiB. np.save will automatically save in 2.0 format if the data requires it,
else it will always use the more compatible 1.0 format.

np.cross now properly broadcasts its two input arrays, even if they
have different number of dimensions. In earlier versions this would result
in either an error being raised, or wrong results computed.
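
For example (a minimal sketch):

a = np.ones((2, 3))          # two 3-vectors
b = np.array([0., 0., 1.])   # a single 3-vector with fewer dimensions
np.cross(a, b).shape         # (2, 3); b is broadcast against a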

Pairwise summation is now used in the sum method, but only along the fast
axis and for blocks of values up to 8192 elements long. This should also
improve the accuracy of var and std in some common cases.
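
The accuracy gain is easiest to see with many equal float32 values (a sketch; the exact error depends on the platform):

x = np.full(10**6, 0.1, dtype=np.float32)
x.sum()   # close to the exact 100000.0; naive left-to-right summation drifts much further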

For the built-in numeric types, np.searchsorted no longer relies on the
data type’s compare function to perform the search, but is now
implemented by type specific functions. Depending on the size of the
inputs, this can result in performance improvements over 2x.

After setting numpy.distutils.system_info.system_info.verbosity = 0,
calls to numpy.distutils.system_info.get_info('blas_opt') will not
print anything to the output. This is mostly useful for other packages
using numpy.distutils.

The polynomial classes have been refactored to use an abstract base class
rather than a template in order to implement a common interface. This makes
importing the polynomial package faster as the classes do not need to be
compiled on import.

Several more functions now release the Global Interpreter Lock, allowing more
efficient parallelization using the threading module. Most notably, the GIL is
now released for fancy indexing and np.where, and the random module now
uses a per-state lock instead of the GIL.

Integer and empty input to select is deprecated. In the future only
boolean arrays will be valid conditions, and an empty condlist will be
considered an input error instead of returning the default.
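
The supported form uses boolean condition arrays, for example:

x = np.arange(-2, 3)
np.select([x < 0, x > 0], [-x, x], default=0)   # array([2, 1, 0, 1, 2])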

The utility functions npy_PyFile_Dup and npy_PyFile_DupClose are broken by the
internal buffering that Python 3 applies to its file objects.
To fix this, two new functions, npy_PyFile_Dup2 and npy_PyFile_DupClose2, are
declared in npy_3kcompat.h and the old functions are deprecated.
Due to the fragile nature of these functions it is recommended to use the
Python API instead when possible.

The doc/sphinxext content has been moved into its own github repository,
and is included in numpy as a submodule. See the instructions in
doc/HOWTO_BUILD_DOCS.rst.txt for how to access the content.

The hash function of numpy.void scalars has been changed. Previously the
pointer to the data was hashed as an integer. Now, the hash function uses
the tuple-hash algorithm to combine the hash functions of the elements of
the scalar, but only if the scalar is read-only.

Numpy has switched its build system to using ‘separate compilation’ by
default. In previous releases this was supported, but not the default. This
should produce the same results as the old system, but if you’re trying to
do something complicated like link numpy statically or using an unusual
compiler, then it’s possible you will encounter problems. If so, please
file a bug and, as a temporary workaround, you can re-enable the old build
system by exporting the shell variable NPY_SEPARATE_COMPILATION=0.

For the AdvancedNew iterator the oa_ndim flag should now be -1 to indicate
that no op_axes and itershape are passed in. The oa_ndim == 0 case
now indicates 0-d iteration with op_axes being NULL, and the old
usage is deprecated. This does not affect the NpyIter_New or
NpyIter_MultiNew functions.

The functions nanargmin and nanargmax now return np.iinfo(np.intp).min for
the index in all-NaN slices. Previously the functions would raise a ValueError
for array returns and NaN for scalar returns.

There is a new compile time environment variable
NPY_RELAXED_STRIDES_CHECKING. If this variable is set to 1, then
numpy will consider more arrays to be C- or F-contiguous – for
example, it becomes possible to have a column vector which is
considered both C- and F-contiguous simultaneously. The new definition
is more accurate, allows for faster code that makes fewer unnecessary
copies, and simplifies numpy’s code internally. However, it may also
break third-party libraries that make too-strong assumptions about the
stride values of C- and F-contiguous arrays. (It is also currently
known that this breaks Cython code using memoryviews, which will be
fixed in Cython.) THIS WILL BECOME THE DEFAULT IN A FUTURE RELEASE, SO
PLEASE TEST YOUR CODE NOW AGAINST NUMPY BUILT WITH:

NPY_RELAXED_STRIDES_CHECKING=1 python setup.py install

You can check whether NPY_RELAXED_STRIDES_CHECKING is in effect by
running:

np.ones((10, 1), order="C").flags.f_contiguous

This will be True if relaxed strides checking is enabled, and
False otherwise. The typical problem we’ve seen so far is C code
that works with C-contiguous arrays, and assumes that the itemsize can
be accessed by looking at the last element in the PyArray_STRIDES(arr)
array. When relaxed strides are in effect, this is not true (and in
fact, it never was true in some corner cases). Instead, use
PyArray_ITEMSIZE(arr).

For more information check the “Internal memory layout of an ndarray”
section in the documentation.

Binary operations of the form <array-or-subclass>*<non-array-subclass>
where <non-array-subclass> declares an __array_priority__ higher than
that of <array-or-subclass> will now unconditionally return
NotImplemented, giving <non-array-subclass> a chance to handle the
operation. Previously, NotImplemented would only be returned if
<non-array-subclass> actually implemented the reversed operation, and after
a (potentially expensive) array conversion of <non-array-subclass> had been
attempted. (bug, pull request)

The npv function had a bug. Contrary to what the documentation stated, it
summed from indexes 1 to M instead of from 0 to M-1. The
fix changes the returned value. The mirr function called the npv function,
but worked around the problem, so that was also fixed and the return value
of the mirr function remains unchanged.

The function at has been added to ufunc objects to allow unbuffered in
place application of a ufunc when fancy indexing is used. For example, the
following will increment the first and second items in the array, and will
increment the third item twice: numpy.add.at(arr, [0, 1, 2, 2], 1)

This is what many have mistakenly thought arr[[0,1,2,2]]+=1 would do,
but that does not work as the incremented value of arr[2] is simply copied
into the third slot in arr twice, not incremented twice.
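
A minimal sketch contrasting the two:

a = np.zeros(4, dtype=int)
np.add.at(a, [0, 1, 2, 2], 1)
a            # array([1, 1, 2, 0]); index 2 really is incremented twice

b = np.zeros(4, dtype=int)
b[[0, 1, 2, 2]] += 1
b            # array([1, 1, 1, 0]); the duplicate index is applied only once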

A partition by index k moves the kth smallest element into position k of
an array. All elements before position k are then smaller than or equal to
the value in position k, and all elements following k are greater than or
equal to the value in position k. The ordering of the values within these
bounds is undefined.
A sequence of indices can be provided to sort all of them into their sorted
positions at once via iterative partitioning.
This can be used to efficiently obtain order statistics like the median or
percentiles of samples.
partition has a linear time complexity of O(n) while a full sort has
O(n log(n)).
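
For example (exact output may vary, since only position k is pinned down):

a = np.array([7, 2, 9, 1, 5])
np.partition(a, 2)      # e.g. array([2, 1, 5, 9, 7]); position 2 holds the 3rd smallest value
np.partition(a, 2)[2]   # 5, the median of a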

New modes ‘complete’, ‘reduced’, and ‘raw’ have been added to the qr
factorization and the old ‘full’ and ‘economic’ modes are deprecated.
The ‘reduced’ mode replaces the old ‘full’ mode and is the default as was
the ‘full’ mode, so backward compatibility can be maintained by not
specifying the mode.

The ‘complete’ mode returns a full dimensional factorization, which can be
useful for obtaining a basis for the orthogonal complement of the range
space. The ‘raw’ mode returns arrays that contain the Householder
reflectors and scaling factors that can be used in the future to apply q
without needing to convert to a matrix. The ‘economic’ mode is simply
deprecated, there isn’t much use for it and it isn’t any more efficient
than the ‘raw’ mode.
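
A sketch of the new modes:

a = np.random.rand(5, 3)
q, r = np.linalg.qr(a, mode='reduced')    # the default: q is (5, 3), r is (3, 3)
q, r = np.linalg.qr(a, mode='complete')   # q is (5, 5), a full orthonormal basis
h, tau = np.linalg.qr(a, mode='raw')      # Householder reflectors and scaling factors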

It is now possible to use np.newaxis/None together with index
arrays, instead of only in simple indices. This means that
array[np.newaxis, [0, 1]] will now work as expected and select the first
two rows while prepending a new axis to the array.
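
For example:

a = np.arange(12).reshape(3, 4)
a[np.newaxis, [0, 1]].shape   # (1, 2, 4): first two rows, with a new leading axis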

New ufuncs can now be registered with builtin input types and a custom
output type. Before this change, NumPy wouldn’t be able to find the right
ufunc loop function when the ufunc was called from Python, because the ufunc
loop signature matching logic wasn’t looking at the output operand type.
Now the correct ufunc loop is found, as long as the user provides an output
argument with the correct output type.

The pad function has a new implementation, greatly improving performance for
all inputs except mode= (retained for backwards compatibility).
Scaling with dimensionality is dramatically improved for rank >= 4.

isnan, isinf, isfinite and byteswap have been improved to take
advantage of compiler builtins to avoid expensive calls to libc.
This improves performance of these operations by about a factor of two on gnu
libc systems.

Several functions have been optimized to make use of SSE2 CPU SIMD instructions.

Float32 and float64:

base math (add, subtract, divide, multiply)

sqrt

minimum/maximum

absolute

Bool:

logical_or

logical_and

logical_not

This improves performance of these operations up to 4x/2x for float32/float64
and up to 10x for bool depending on the location of the data in the CPU caches.
The performance gain is greatest for in-place operations.

In order to use the improved functions the SSE2 instruction set must be enabled
at compile time. It is enabled by default on x86_64 systems. On x86_32 with a
capable CPU it must be enabled by passing the appropriate flag to the CFLAGS
build variable (-msse2 with gcc).

median is now implemented in terms of partition instead of sort which
reduces its time complexity from O(n log(n)) to O(n).
If used with the overwrite_input option the array will now only be partially
sorted instead of fully sorted.

Previously, negative indices and indices that pointed past the end of
the array were simply ignored. Now, this will raise a FutureWarning or
DeprecationWarning. In the future they will be treated like normal indexing
treats them – negative indices will wrap around, and out-of-bounds indices
will generate an error.

Previously, boolean indices were treated as if they were integers (always
referring to either the 0th or 1st item in the array). In the future, they
will be treated as masks. In this release, they raise a FutureWarning
announcing this coming change.

In NumPy 1.7, np.insert already allowed the syntax
np.insert(arr, 3, [1,2,3]) to insert multiple items at a single position.
In NumPy 1.8, this is also possible for np.insert(arr, [3], [1, 2, 3]).
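
Both spellings now produce the same result (a sketch):

arr = np.array([10, 20, 30, 40])
np.insert(arr, 3, [1, 2, 3])    # array([10, 20, 30,  1,  2,  3, 40])
np.insert(arr, [3], [1, 2, 3])  # same result, allowed from 1.8 on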

The PyArray_Type instance creation function tp_new now
uses tp_basicsize to determine how much memory to allocate.
In previous releases only sizeof(PyArrayObject) bytes of
memory were allocated, often requiring C-API subtypes to
reimplement tp_new.

The use of non-integers for indices and most integer arguments has been
deprecated. Previously, float indices and function arguments such as axes or
shapes were truncated to integers without warning. For example
arr.reshape(3., -1) or arr[0.] will trigger a deprecation warning in
NumPy 1.8, and in some future version of NumPy they will raise an error.

In a future version of numpy, the functions np.diag, np.diagonal, and the
diagonal method of ndarrays will return a view onto the original array,
instead of producing a copy as they do now. This makes a difference if you
write to the array returned by any of these functions. To facilitate this
transition, numpy 1.7 produces a FutureWarning if it detects that you may
be attempting to write to such an array. See the documentation for
np.diagonal for details.

Similar to np.diagonal above, in a future version of numpy, indexing a
record array by a list of field names will return a view onto the original
array, instead of producing a copy as it does now. As with np.diagonal,
numpy 1.7 produces a FutureWarning if it detects that you may be attempting
to write to such an array. See the documentation for array indexing for
details.

In a future version of numpy, the default casting rule for UFunc out=
parameters will be changed from ‘unsafe’ to ‘same_kind’. (This also applies
to in-place operations like a += b, which is equivalent to np.add(a, b,
out=a).) Most usages which violate the ‘same_kind’ rule are likely bugs, so
this change may expose previously undetected errors in projects that depend
on NumPy. In this version of numpy, such usages will continue to succeed,
but will raise a DeprecationWarning.
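
A typical usage that violates ‘same_kind’ (a sketch; it warns in this release and errors later):

a = np.zeros(3, dtype=np.int64)
np.add(a, 1.5, out=a)   # float64 result cast down to int64: 'unsafe'
a += 1.5                # equivalent in-place form, warns for the same reason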

Full-array boolean indexing has been optimized to use a different,
optimized code path. This code path should produce the same results,
but any feedback about changes to your code would be appreciated.

Attempting to write to a read-only array (one with arr.flags.writeable
set to False) used to raise either a RuntimeError, ValueError, or
TypeError inconsistently, depending on which code path was taken. It now
consistently raises a ValueError.

The <ufunc>.reduce functions evaluate some reductions in a different order
than in previous versions of NumPy, generally providing higher performance.
Because of the nature of floating-point arithmetic, this may subtly change
some results, just as linking NumPy to a different BLAS implementation
such as MKL can.

If upgrading from 1.5, then generally in 1.6 and 1.7 substantial code has
been added and some code paths altered, particularly in the areas of type
resolution and buffered iteration over universal functions. This might have
an impact on your code, particularly if you relied on accidental behavior
in the past.

Any ufunc.reduce function call, as well as other reductions like sum, prod,
any, all, max and min, now supports the ability to choose a subset of the
axes to reduce over. Previously, one could say axis=None to mean all the
axes or axis=# to pick a single axis. Now, one can also say axis=(#,#) to
pick a list of axes for reduction.

There is a new keepdims= parameter, which if set to True, doesn’t throw
away the reduction axes but instead sets them to have size one. When this
option is set, the reduction result will broadcast correctly to the
original operand which was reduced.

Axis keywords have been added to the integration and differentiation
functions and a tensor keyword was added to the evaluation functions.
These additions allow multi-dimensional coefficient arrays to be used in
those functions. New functions for evaluating 2-D and 3-D coefficient
arrays on grids or sets of points were added together with 2-D and 3-D
pseudo-Vandermonde matrices that can be used for fitting.

New function PyArray_FailUnlessWriteable provides a consistent interface
for checking array writeability – any C code which works with arrays whose
WRITEABLE flag is not known to be True a priori should make sure to call
this function before writing.

The function np.concatenate tries to match the layout of its input arrays.
Previously, the layout did not follow any particular rule, and depended
in an undesirable way on the particular axis chosen for concatenation. A
bug was also fixed which silently allowed out-of-bounds axis arguments.

The ufuncs logical_or, logical_and, and logical_not now follow Python’s
behavior with object arrays, instead of trying to call methods on the
objects. For example the expression (3 and 'test') produces the string
'test', and now np.logical_and(np.array(3, 'O'), np.array('test', 'O'))
produces 'test' as well.

The .base attribute on ndarrays, which is used on views to ensure that the
underlying array owning the memory is not deallocated prematurely, now
collapses out references when you have a view-of-a-view. For example:

a = np.arange(10)
b = a[1:]
c = b[1:]

In numpy 1.6, c.base is b, and c.base.base is a. In numpy 1.7,
c.base is a.

To increase backwards compatibility for software which relies on the old
behaviour of .base, we only ‘skip over’ objects which have exactly the same
type as the newly created view. This makes a difference if you use ndarray
subclasses. For example, if we have a mix of ndarray and matrix objects
which are all views on the same original ndarray:

a = np.arange(10)
b = np.asmatrix(a)
c = b[0, 1:]
d = c[0, 1:]

then d.base will be b. This is because d is a matrix object,
and so the collapsing process only continues so long as it encounters other
matrix objects. It considers c, b, and a in that order, and
b is the last entry in that list which is a matrix object.

Casting rules have undergone some changes in corner cases, due to the
NA-related work. In particular for combinations of scalar+scalar:

the longlong type (q) now stays longlong for operations with any other
number (? b h i l q p B H I), previously it was cast as int_ (l). The
ulonglong type (Q) now stays as ulonglong instead of uint (L).

the timedelta64 type (m) can now be mixed with any integer type (b h i l
q p B H I L Q P), previously it raised TypeError.

For array + scalar, the above rules just broadcast, except when the
array and the scalar are unsigned/signed integers; then the result gets
converted to the array type (of possibly larger size), as illustrated by the
following examples:
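
# A sketch of such examples; the resulting dtypes follow the rules above.
(np.zeros((2,), dtype=np.uint8) + np.int16(257)).dtype     # dtype('uint16')
(np.zeros((2,), dtype=np.int8) + np.uint16(257)).dtype     # dtype('int16')
(np.zeros((2,), dtype=np.int16) + np.uint32(2**17)).dtype  # dtype('int32')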

Direct access to the fields of PyArrayObject* has been deprecated. Direct
access has been recommended against for many releases. Expect similar
deprecations for PyArray_Descr* and other core objects in the future as
preparation for NumPy 2.0.

The macros in old_defines.h are deprecated and will be removed in the next
major release (>= 2.0). The sed script tools/replace_old_macros.sed can be
used to replace these macros with the newer versions.

You can test your code against the deprecated C API by #defining
NPY_NO_DEPRECATED_API to the target version number, for example
NPY_1_7_API_VERSION, before including any NumPy headers.

The NPY_CHAR member of the NPY_TYPES enum is deprecated and will be
removed in NumPy 1.8. See the discussion at
gh-2801 for more details.

This is a bugfix release in the 1.6.x series. Due to the delay of the NumPy
1.7.0 release, this release contains far more fixes than a regular NumPy bugfix
release. It also includes a number of documentation and build improvements.

This release adds support for the IEEE 754-2008 binary16 format, available as
the data type numpy.half. Within Python, the type behaves similarly to
float or double, and C extensions can add support for it with the exposed
half-float API.
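
For example:

a = np.array([1.0, 2.5], dtype=np.half)   # 2 bytes per element
np.finfo(np.half).max                     # the largest representable half (65504)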

A new iterator has been added, replacing the functionality of the
existing iterator and multi-iterator with a single object and API.
This iterator works well with general memory layouts different from
C or Fortran contiguous, and handles both standard NumPy and
customized broadcasting. The buffering, automatic data type
conversion, and optional output parameters, offered by
ufuncs but difficult to replicate elsewhere, are now exposed by this
iterator.
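
The same machinery is exposed in Python as np.nditer; a minimal sketch of buffered iteration with automatic type conversion:

a = np.arange(6).reshape(2, 3)                               # int array
for x in np.nditer(a, flags=['buffered'], op_dtypes=['float64']):
    print(x)                                                 # each x arrives as a float64 0-d array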

Extend the number of polynomials available in the polynomial package. In
addition, a new window attribute has been added to the classes in
order to specify the range the domain maps to. This is mostly useful
for the Laguerre, Hermite, and HermiteE polynomials whose natural domains
are infinite and provides a more intuitive way to get the correct mapping
of values without playing unnatural tricks with the domain.

F2py now supports wrapping Fortran 90 routines that use assumed shape
arrays. Previously such routines could be called from Python, but the
corresponding Fortran routines received assumed shape arrays as zero
length arrays, which caused unpredictable results. Thanks to Lorenz
Hüdepohl for pointing out the correct way to interface routines with
assumed shape arrays.

In addition, f2py now supports automatic wrapping of Fortran routines
that use a two-argument size function in dimension specifications.

numpy.ravel_multi_index : Converts a multi-index tuple into
an array of flat indices, applying boundary modes to the indices.
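
For example:

np.ravel_multi_index(([0, 1], [1, 2]), (2, 3))               # array([1, 5])
np.ravel_multi_index(([0, 3], [1, 2]), (2, 3), mode='clip')  # array([1, 5]); 3 is clipped to 1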

numpy.einsum : Evaluate the Einstein summation convention. Using the
Einstein summation convention, many common multi-dimensional array operations
can be represented in a simple fashion. This function provides a way to
compute such summations.
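
For example, a matrix-vector product written in the convention:

a = np.arange(6).reshape(2, 3)
b = np.arange(3)
np.einsum('ij,j->i', a, b)   # array([ 5, 14]), the same as np.dot(a, b)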

numpy.count_nonzero : Counts the number of non-zero elements in an array.

numpy.result_type and numpy.min_scalar_type : These functions expose
the underlying type promotion used by the ufuncs and other operations to
determine the types of outputs. These improve upon numpy.common_type
and numpy.mintypecode, which provide similar functionality but do
not match the ufunc implementation.
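
For example:

np.result_type(np.int8, np.float32)   # dtype('float32')
np.min_scalar_type(300)               # dtype('uint16'), the smallest type that can hold 300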

The testing framework gained numpy.testing.assert_allclose, which provides
a more convenient way to compare floating point arrays than
assert_almost_equal, assert_approx_equal and assert_array_almost_equal.

In addition to the APIs for the new iterator and half data type, a number
of other additions have been made to the C API. The type promotion
mechanism used by ufuncs is exposed via PyArray_PromoteTypes,
PyArray_ResultType, and PyArray_MinScalarType. A new enumeration
NPY_CASTING has been added which controls what types of casts are
permitted. This is used by the new functions PyArray_CanCastArrayTo
and PyArray_CanCastTypeTo. A more flexible way to handle
conversion of arbitrary python objects into arrays is exposed by
PyArray_GetArrayParamsFromObject.

Note that the Numpy testing framework relies on nose, which does not have a
Python 3 compatible release yet. A working Python 3 branch of nose can be found
at http://bitbucket.org/jpellerin/nose3/ however.

The new buffer protocol described by PEP 3118 is fully supported in this
version of Numpy. On Python versions >= 2.6 Numpy arrays expose the buffer
interface, and array(), asarray() and other functions accept new-style buffers
as input.
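
For example (a sketch):

m = memoryview(np.arange(3))     # ndarray exposes the new buffer interface
np.asarray(memoryview(b'abc'))   # array([97, 98, 99], dtype=uint8) from a new-style buffer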

The slogdet function returns the sign and logarithm of the determinant
of a matrix. Because the determinant may involve the product of many
small/large values, the result is often more accurate than that obtained
by simple multiplication.
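
For example:

sign, logdet = np.linalg.slogdet(2 * np.eye(3))
sign, logdet   # (1.0, 2.0794...), i.e. the determinant is 1.0 * exp(2.0794...) = 8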

The new header file ndarraytypes.h contains the symbols from
ndarrayobject.h that do not depend on the PY_ARRAY_UNIQUE_SYMBOL and
NO_IMPORT/_ARRAY macros. Broadly, these symbols are types, typedefs,
and enumerations; the array function calls are left in
ndarrayobject.h. This allows users to include array-related types and
enumerations without needing to concern themselves with the macro
expansions and their side-effects.

An __array_prepare__ method has been added to ndarray to provide subclasses
greater flexibility to interact with ufuncs and ufunc-like functions. ndarray
already provided __array_wrap__, which allowed subclasses to set the array type
for the result and populate metadata on the way out of the ufunc (as seen in
the implementation of MaskedArray). For some applications it is necessary to
provide checks and populate metadata on the way in. __array_prepare__ is
therefore called just after the ufunc has initialized the output array but
before computing the results and populating it. This way, checks can be made
and errors raised before operations which may modify data in place.
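
A minimal sketch of the hook, for NumPy versions that provide it (the subclass and the check here are illustrative):

class Checked(np.ndarray):
    def __array_prepare__(self, out_arr, context=None):
        # runs after the ufunc has allocated out_arr but before it computes,
        # so a failed check raises before any data is modified in place
        if out_arr.dtype.kind not in 'fi':
            raise TypeError("only numeric results supported")
        return out_arr.view(type(self))

a = np.arange(3).view(Checked)
np.add(a, 1)   # calls Checked.__array_prepare__ before computing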

Previously, if an extension was built against a version N of NumPy and used on
a system with NumPy M < N, import_array succeeded, which could cause crashes
because version M does not have all the functions present in N. Starting from
NumPy 1.4.0, this will cause a failure in import_array, so the error will be
caught early on.

A new neighborhood iterator has been added to the C API. It can be used to
iterate over the items in a neighborhood of an array, and can handle boundary
conditions automatically. Zero and one padding are available, as well as
arbitrary constant value, mirror and circular padding.

New modules chebyshev and polynomial have been added. The new polynomial module
is not compatible with the current polynomial support in numpy, but is much
like the new chebyshev module. The most noticeable difference to most will
be that coefficients are specified from low to high power, that the low
level functions do not work with the Chebyshev and Polynomial classes as
arguments, and that the Chebyshev and Polynomial classes include a domain.
Mapping between domains is a linear substitution and the two classes can be
converted one to the other, allowing, for instance, a Chebyshev series in
one domain to be expanded as a polynomial in another domain. The new classes
should generally be used instead of the low level functions, the latter are
provided for those who wish to build their own classes.

The new modules are not automatically imported into the numpy namespace,
they must be explicitly brought in with an “import numpy.polynomial”
statement.
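
For example, converting a Chebyshev series to a polynomial in another domain (a sketch):

from numpy.polynomial import Chebyshev, Polynomial

c = Chebyshev([1, 2, 3], domain=[0, 10])   # coefficients, low power to high
p = c.convert(kind=Polynomial)             # the same series in the power basis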

PyArray_GetNDArrayCFeatureVersion: return the API version of the
loaded numpy.

PyArray_Correlate2 - like PyArray_Correlate, but implements the usual
definition of correlation. Inputs are not swapped, and conjugate is
taken for complex arrays.

PyArray_NeighborhoodIterNew - a new iterator to iterate over a
neighborhood of a point, with automatic boundaries handling. It is
documented in the iterators section of the C-API reference, and you can
find some examples in the multiarray_test.c.src file in numpy.core.

deprecated decorator: this decorator may be used to avoid cluttering
testing output while testing that a DeprecationWarning is effectively
raised by the decorated test.

assert_array_almost_equal_nulp: new method to compare two arrays of
floating point values. With this function, two values are considered
close if there are not many representable floating point values in
between, making it more robust than assert_array_almost_equal when the
values fluctuate a lot.

assert_array_max_ulp: raise an assertion if there are more than N
representable numbers between two floating point values.

assert_warns: raise an AssertionError if a callable does not generate a
warning of the appropriate class, without altering the warning state.

In 1.3.0, we started putting portable C math routines in the npymath library,
so that people can use them to write portable extensions. Unfortunately, it
was not possible to easily link against this library: in 1.4.0, support has
been added to numpy.distutils so that third parties can reuse this library.
See the coremath documentation for more information.

In previous versions of NumPy some set functions (intersect1d,
setxor1d, setdiff1d and setmember1d) could return incorrect results if
the input arrays contained duplicate items. These now work correctly
for input arrays with duplicates. setmember1d has been renamed to
in1d, as with the change to accept arrays with duplicates it is
no longer a set operation, and is conceptually similar to an
elementwise version of the Python operator ‘in’. All of these
functions now accept the boolean keyword assume_unique. This is False
by default, but can be set True if the input arrays are known not
to contain duplicates, which can increase the functions’ execution
speed.
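
For example:

np.in1d([1, 2, 3], [2, 4])                       # array([False,  True, False])
np.intersect1d([1, 2, 2, 3], [2, 3])             # array([2, 3]); duplicates handled correctly
np.in1d([1, 2, 3], [2, 4], assume_unique=True)   # faster when inputs are duplicate-free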

correlate: it takes a new keyword argument old_behavior. When True (the
default), it returns the same result as before. When False, it computes
the conventional correlation and takes the conjugate for complex arrays.
The old behavior will be removed in NumPy 1.5, and raises a
DeprecationWarning in 1.4.

unique1d: use unique instead. unique1d raises a deprecation
warning in 1.4, and will be removed in 1.5.

intersect1d_nu: use intersect1d instead. intersect1d_nu raises
a deprecation warning in 1.4, and will be removed in 1.5.

setmember1d: use in1d instead. setmember1d raises a deprecation
warning in 1.4, and will be removed in 1.5.

The following raise errors:

When operating on 0-d arrays, numpy.max and other functions accept
only axis=0, axis=-1 and axis=None. Using an out-of-bounds
axis is an indication of a bug, so Numpy raises an error for these cases
now.

Specifying axis > MAX_DIMS is no longer allowed; Numpy now raises an
error instead of behaving as it did for axis=None.

By default, every file of multiarray (and umath) is merged into one for
compilation as was the case before, but if NPY_SEPARATE_COMPILATION env
variable is set to a non-negative value, experimental individual compilation of
each file is enabled. This makes the compile/debug cycle much faster when
working on core numpy.

There is a general need for looping over not only functions on scalars but also
over functions on vectors (or arrays), as explained on
http://scipy.org/scipy/numpy/wiki/GeneralLoopingFunctions. We propose to
realize this concept by generalizing the universal functions (ufuncs), and
provide a C implementation that adds ~500 lines to the numpy code base. In
current (specialized) ufuncs, the elementary function is limited to
element-by-element operations, whereas the generalized version supports
“sub-array” by “sub-array” operations. The Perl vector library PDL provides
similar functionality and its terms are re-used in the following.

Each generalized ufunc has information associated with it that states what the
“core” dimensionality of the inputs is, as well as the corresponding
dimensionality of the outputs (the element-wise ufuncs have zero core
dimensions). The list of the core dimensions for all arguments is called the
“signature” of a ufunc. For example, the ufunc numpy.add has signature
“(),()->()” defining two scalar inputs and one scalar output.

Another example is (see the GeneralLoopingFunctions page) the function
inner1d(a,b) with a signature of “(i),(i)->()”. This applies the inner product
along the last axis of each input, but keeps the remaining indices intact. For
example, where a is of shape (3,5,N) and b is of shape (5,N), this will return
an output of shape (3,5). The underlying elementary function is called 3*5
times. In the signature, we specify one core dimension “(i)” for each input and
zero core dimensions “()” for the output, since it takes two 1-d arrays and
returns a scalar. By using the same name “i”, we specify that the two
corresponding dimensions should be of the same size (or one of them is of size
1 and will be broadcasted).
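
In today’s NumPy the same “(i),(i)->()” semantics can be sketched with np.einsum (inner1d itself is not a public NumPy function):

a = np.ones((3, 5, 4))
b = np.ones((5, 4))
np.einsum('...i,...i->...', a, b).shape   # (3, 5): core dimension i is reduced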

The dimensions beyond the core dimensions are called “loop” dimensions. In the
above example, this corresponds to (3,5).

The usual numpy “broadcasting” rules apply, where the signature determines how
the dimensions of each input/output object are split into core and loop
dimensions:

While an input array has a smaller dimensionality than the corresponding number
of core dimensions, 1’s are pre-pended to its shape. The core dimensions are
removed from all inputs and the remaining dimensions are broadcast, defining
the loop dimensions. The output is given by the loop dimensions plus the
output core dimensions.

Float formatting is now handled by numpy instead of the C runtime: this enables
locale independent formatting, more robust fromstring and related methods.
Special values (inf and nan) are also more consistent across platforms (nan vs
IND/NaN, etc...), and more consistent with recent python formatting work (in
2.6 and later).

The maximum/minimum ufuncs now reliably propagate nans. If one of the
arguments is a nan, then nan is returned. This affects np.min/np.max, amin/amax
and the array methods max/min. New ufuncs fmax and fmin have been added to deal
with non-propagating nans.
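
For example:

np.maximum([np.nan, 2.0], 1.0)   # array([nan,  2.]); the nan propagates
np.fmax([np.nan, 2.0], 1.0)      # array([1., 2.]); the nan is ignored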

Gfortran can now be used as a Fortran compiler for numpy on Windows, even when
the C compiler is Visual Studio (VS 2005 and above; VS 2003 will NOT work).
Gfortran + Visual Studio does not work on 64-bit Windows (but gcc + gfortran
does). It is unclear whether it will be possible to use gfortran and Visual
Studio at all on x64.

This should make the porting to new platforms easier, and more robust. In
particular, the configuration stage does not need to execute any code on the
target platform, which is a first step toward cross-compilation.

The core math functions (sin, cos, etc... for basic C types) have been put into
a separate library; it acts as a compatibility layer, to support most C99 maths
functions (real only for now). The library includes platform-specific fixes for
various maths functions, so using these versions should be more robust than
using your platform’s functions directly. The API for existing functions is
exactly the same as the C99 math functions API; the only difference is the npy
prefix (npy_cos vs cos).

npy_cpu.h defines numpy specific CPU defines, such as NPY_CPU_X86, etc...
Those are portable across OS and toolchains, and set up when the header is
parsed, so that they can be safely used even in the case of cross-compilation
(the values are not set when numpy is built), or for multi-arch binaries (e.g.
fat binaries on Mac OS X).

npy_endian.h defines numpy specific endianness defines, modeled on the glibc
endian.h. NPY_BYTE_ORDER is equivalent to BYTE_ORDER, and one of
NPY_LITTLE_ENDIAN or NPY_BIG_ENDIAN is defined. As for CPU archs, those are set
when the header is parsed by the compiler, and as such can be used for
cross-compilation and multi-arch binaries.