Hi
I have been getting reproducible segmentation faults when running my code
(though it would not crash on every computer, or even on the same machine
two days apart, even though I believe the configuration had not changed).
After investigating for a while, I think scipy.signal.correlate2d is the
culprit, though I may be using it outside its specification here.
Essentially I am correlating (or convolving, with the same
consequences...) two input arrays, the first of which has more rows and
the second more columns, with periodic boundaries (boundary 'wrap'), and
expecting output the same size as the first array (mode 'same').
The first behaviour I find strange, from my experiments to build a
minimal error-generating example, is that if one array is bigger than the
other in both rows and columns, its shape is the one kept for the "same"
output. I would have expected the shape of the first array to be kept.
In [25]: ss.correlate2d(s.ones((2,2)),s.ones((4,4)), 'same', 'wrap')
Out[25]:
array([[ 4.,  4.,  4.,  4.],
       [ 4.,  4.,  4.,  4.],
       [ 4.,  4.,  4.,  4.],
       [ 4.,  4.,  4.,  4.]])
In [20]: ss.correlate2d(s.ones((4,4)),s.ones((2,2)), 'same', 'wrap')
Out[20]:
array([[ 4.,  4.,  4.,  4.],
       [ 4.,  4.,  4.,  4.],
       [ 4.,  4.,  4.,  4.],
       [ 4.,  4.,  4.,  4.]])
In fact the decision seems to be based on the total number of entries;
in case of equality, the shape of the first array is chosen.
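For what it's worth, the rule I observe can be summarised in a couple of
lines (this is only my reading of the behaviour on my Scipy 0.5.1
install, not documented semantics, and the function name is mine):

```python
import numpy as np

def observed_same_shape(in1, in2):
    """Shape that mode 'same' appears to return here:
    the input with more total entries wins; ties go to the first."""
    return in1.shape if in1.size >= in2.size else in2.shape

# Consistent with the two sessions above: both give (4, 4).
print(observed_same_shape(np.ones((2, 2)), np.ones((4, 4))))
print(observed_same_shape(np.ones((4, 4)), np.ones((2, 2))))
```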
More of a nuisance: when convolving two arrays where the first has more
rows but fewer columns, we get completely wild results:
In [13]: ss.convolve2d(s.ones((4,4)),s.ones((1,15)), 'same', 'wrap')
Out[13]:
array([[  6.21107620e+223,   6.21107620e+223,   1.40000000e+001,
          1.50000000e+001],
       [  1.50000000e+001,   1.50000000e+001,   1.50000000e+001,
          1.50000000e+001],
       [  1.50000000e+001,   1.50000000e+001,   1.50000000e+001,
          1.50000000e+001],
       [  1.50000000e+001,   1.40000000e+001,   1.30000000e+001,
          1.71130456e+059]])
instead of 15 everywhere, or:
In [34]: ss.correlate2d(s.ones((5,5)),s.ones((1,24)), 'same', 'wrap')
Out[34]:
array([[  1.68696161e+69,   1.90000000e+01,   2.00000000e+01,
          2.10000000e+01,   2.20000000e+01],
       [  2.30000000e+01,   2.40000000e+01,   2.40000000e+01,
          2.40000000e+01,   2.40000000e+01],
       [  2.40000000e+01,   2.40000000e+01,   2.40000000e+01,
          2.40000000e+01,   2.40000000e+01],
       [  2.40000000e+01,   2.40000000e+01,   2.40000000e+01,
          2.30000000e+01,   2.20000000e+01],
       [  2.10000000e+01,              nan,              nan,
                     nan,              nan]])
instead of 24 everywhere.
The bigger the arrays, the wilder the outcome...
I am currently on Xubuntu Edgy Eft (fresh install) with:
Python 2.4.4c1 within ipython 0.7.2
Scipy 0.5.1
Numpy 1.0.rc1
I have also installed PyWavelets from PyPI:
pywt 0.1.4
I hope to find out whether this should be considered a bug, or
specifically outside the scope of these functions. Meanwhile I am
rewriting my code to avoid such "extreme" uses of convolve2d and
correlate2d...
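In case it helps anyone hitting the same problem, the kind of
replacement I am moving to is a periodic convolution done by hand with
FFTs, folding the second array into the period of the first before
applying the convolution theorem. The function name is mine, and note
that it anchors the kernel at the origin rather than centring it as
'same' does, which is enough for my use (a roll would fix the offset):

```python
import numpy as np

def wrap_convolve2d(in1, in2):
    """Periodic ('wrap') 2-D convolution with output shaped like in1.

    in2 is first folded into the period of in1 (entries falling outside
    the domain wrap around and accumulate), so in2 may be larger than
    in1 in either dimension; then FFTs do the circular convolution.
    """
    rows, cols = in1.shape
    folded = np.zeros((rows, cols))
    for (i, j), v in np.ndenumerate(in2):
        folded[i % rows, j % cols] += v
    return np.real(np.fft.ifft2(np.fft.fft2(in1) * np.fft.fft2(folded)))

# The troublesome case above now gives the expected constant 15:
print(wrap_convolve2d(np.ones((4, 4)), np.ones((1, 15))))
```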
Jonas