On Fri, Jan 20, 2012 at 7:45 AM, Sturla Molden <sturla@molden.no> wrote:
> Den 18.01.2012 08:50, skrev Fabrice Silva:
>> Note that talkbox seems to have some stuff on Yule-Walker:
>> http://www.ar.media.kyoto-u.ac.jp/members/david/softwares/talkbox/talkbox_doc/index.html
>> in Python for educational purposes, and C for performance.
>
> No need to use C for performance here.
> Computing the autocovariance for Yule-Walker can be vectorized with
> np.dot, which lets BLAS do the work. Something like this:
> def covmtx_yulewalker(x, p):
>     ''' autocorrelation method '''
>     x = np.ascontiguousarray(x)
>     n = x.shape[0]
>     Rxx = np.zeros(p+1)
>     for k in range(0, p+1):
>         Rxx[k] = np.dot(x[:n-k], x[k:])/(n-k-1.0)
>     return Rxx
> Later on, in the code Josef posted, the next bulk of the computation is
> done by LAPACK (linalg.lstsq).
> With NumPy linked against optimized BLAS and LAPACK libraries (e.g. MKL,
> ACML, GotoBLAS2, Cray libsci), doing this in C might actually end up
> being slower. Don't waste your time on C before (1) NumPy is proven to
> be too slow and (2) you have good reasons to believe that C will be
> substantially faster. (NumPy users familiar with MATLAB make the latter
> assumption far too often.)
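For reference, the least-squares step on an autocovariance sequence can be sketched roughly like this (pure NumPy; the function name and the explicit Toeplitz construction are illustrative, not the exact code from the earlier post):

```python
import numpy as np

def ar_yulewalker(Rxx):
    """Solve the Yule-Walker equations for the AR coefficients, given
    the autocovariance sequence Rxx at lags 0..p (e.g. the output of
    covmtx_yulewalker above)."""
    Rxx = np.asarray(Rxx, dtype=float)
    p = len(Rxx) - 1
    # p x p Toeplitz autocovariance matrix: R[i, j] = Rxx[|i - j|]
    idx = np.abs(np.arange(p)[:, None] - np.arange(p)[None, :])
    R = Rxx[idx]
    # least-squares solve of R rho = Rxx[1:], done by LAPACK
    rho, *_ = np.linalg.lstsq(R, Rxx[1:], rcond=None)
    return rho
```

For an exact AR(1) covariance sequence with coefficient 0.5 (Rxx = [4/3, 2/3, 1/3]), this recovers rho = [0.5, 0].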
I think the main argument is that Levinson-Durbin uses fewer
calculations, which might matter if the AR polynomial is very large.
I've read conflicting comments about numerical stability: some argue
in favor of Levinson-Durbin, some in favor of least squares, but Burg
seems to be generally considered numerically better than either
of the other two.
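For comparison, a minimal Levinson-Durbin recursion looks roughly like this (a sketch in pure NumPy; the function name and interface are illustrative). It solves the same Toeplitz system as the lstsq route, but exploits the Toeplitz structure to do it in O(p^2) operations instead of O(p^3):

```python
import numpy as np

def levinson_durbin(r):
    """Levinson-Durbin recursion on the autocovariance sequence r
    (lags 0..p).  Returns (a, err) where a is the AR polynomial
    [1, a_1, ..., a_p] and err the final prediction-error variance."""
    r = np.asarray(r, dtype=float)
    p = len(r) - 1
    a = np.zeros(p + 1)
    a[0] = 1.0
    err = r[0]
    for k in range(1, p + 1):
        # reflection coefficient from the current prediction error
        acc = r[k] + np.dot(a[1:k], r[1:k][::-1])
        lam = -acc / err
        # order-k update of the AR polynomial (RHS is evaluated
        # before assignment, so the overlapping slices are safe)
        a[1:k+1] += lam * a[:k][::-1]
        err *= (1.0 - lam * lam)
    return a, err
```

On the exact AR(1) covariances Rxx = [4/3, 2/3, 1/3] this gives a = [1, -0.5, 0] and err = 1, matching the least-squares solution up to the sign convention of the AR polynomial.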
Josef
> Sturla
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User@scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user