Hello list,
I have been working on a mostly-Python implementation of the
Levenberg-Marquardt algorithm for data fitting, which I have put here:
https://github.com/scipy/scipy/pull/90
One of my main goals was to make it more flexible and usable
than the FORTRAN version we have in scipy right now. So
I took an object-oriented approach, where you inherit from a
fitter class and reimplement the function to fit. Some convenience
functions around this make the approach very simple; the
simplest version uses a decorator. Say you want
to fit your data to a Gaussian; you would write:
    @fitfunction(width=3, height=2, position=4)
    def gaussian(x, width, height, position):
        # some code to calculate the Gaussian here

    gaussian.fit(xdata, ydata, width=2, height=1)
That's it! I would welcome comments on this.
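To make the idea above concrete, here is a minimal sketch of how such a fitfunction decorator could be built on top of scipy.optimize.leastsq. This is illustrative only and not the implementation from the pull request; the internal names (wrap, residuals, and the returned parameter dict) are my own invention.

```python
import numpy as np
from scipy.optimize import leastsq

def fitfunction(**defaults):
    # Hypothetical decorator: records default start values for the
    # named parameters and attaches a .fit() method to the function.
    names = list(defaults)

    def wrap(func):
        def fit(xdata, ydata, **start):
            # Start values given at fit time override the defaults.
            p0 = [start.get(n, defaults[n]) for n in names]

            def residuals(p):
                # Map the flat parameter vector back to keyword arguments.
                return ydata - func(xdata, **dict(zip(names, p)))

            popt, ier = leastsq(residuals, p0)
            return dict(zip(names, np.atleast_1d(popt)))

        func.fit = fit
        return func

    return wrap

@fitfunction(width=3, height=2, position=4)
def gaussian(x, width, height, position):
    return height * np.exp(-((x - position) / width) ** 2 / 2)
```

A call like gaussian.fit(xdata, ydata, width=2, height=1) then runs the least-squares fit starting from the given values, falling back on the decorator defaults for any parameter not supplied.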
While working on it, I was pointed to two different
related efforts. The first is here: http://newville.github.com/lmfit-py/
Matthew Newville wrote this to avoid clumsy, unreadable
fitting routines like this:
    def gaussian(x, p):
        return p[0] * exp(-((x - p[1]) / p[2]) ** 2 / 2)
He's right that this is ugly; unfortunately, I don't think his
solution is much better, which is why I didn't take his route.
There is also the effort of Denis Laxalde,
https://github.com/scipy/scipy/pull/94
in which he tries to unify the minimization algorithms.
In my view this is an unfortunate approach: he
unifies many algorithms into one function, which then
amounts to nothing more than a big if-else statement
dispatching to the algorithm in question. This
is not extensible: I cannot simply write
my super-cool minimizer, give it a new name, and
drop it in alongside the scipy ones; instead I have to change
the scipy function to incorporate my new one.
This is why I wrote a function that looks kind of like
his minimize function, but ignores the
select-algorithm parameter, so that one can drop
in my algorithm for the existing ones.
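The drop-in idea can be sketched in a few lines. Here the algorithm is passed as a plain callable sharing one signature, so a user-written minimizer sits next to the built-in ones without any central if-else to edit. The names (minimize, golden_section) and the toy golden-section search are my own illustration, not code from either pull request.

```python
def minimize(func, x0, algorithm):
    # 'algorithm' is the minimizer itself, not a string selecting
    # a branch of a big if-else -- any callable taking (func, x0)
    # can be dropped in without touching this function.
    return algorithm(func, x0)

def golden_section(func, x0):
    # Toy stand-in minimizer: golden-section search on a fixed
    # bracket around the start value (assumes func is unimodal there).
    phi = (5 ** 0.5 - 1) / 2
    a, b = x0 - 10.0, x0 + 10.0
    while b - a > 1e-8:
        c = b - phi * (b - a)
        d = a + phi * (b - a)
        if func(c) < func(d):
            b = d
        else:
            a = c
    return (a + b) / 2
```

With this shape, minimize(f, 0.0, golden_section) and minimize(f, 0.0, my_super_cool_minimizer) are the same call; no scipy code needs changing to support the second one.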
Greetings
Martin Teichmann