Sebastian Haase wrote:
> On Tue, Mar 16, 2010 at 11:27 AM, David Baddeley
> <david_baddeley@yahoo.com.au> wrote:
>> I think there are a couple of factors here - Python for loops and if statements are bad news performance-wise, so if you can remove them using Cython / C you'll gain a lot. On the other hand, they tend to suffer more of a performance penalty when you've got the line-profiling hooks in place than, for example, NumPy calls, which are vectorised and do all the heavy lifting in places that aren't visible to the profiler.
>> You might also be able to improve performance by doing something like:
>>
>>     for tracki, track in enumerate(self.tracks[self.tracks_tlast == t-1]):
>>
>> if self.tracks and self.tracks_tlast are numpy arrays.
> This did sound like an intriguing idea, but sadly they are all lists,
> because they are appended to as new points are added to the tracks
> one by one.
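For reference, a minimal sketch of the masked loop David suggests, assuming the track data is kept in NumPy arrays; the values below are made up purely for illustration:

```python
import numpy as np

# Illustrative stand-ins: per-track data, and the frame in which each
# track was last updated, stored as NumPy arrays rather than lists.
tracks = np.array([10.0, 20.0, 30.0, 40.0])
tracks_tlast = np.array([3, 4, 4, 2])
t = 5  # current frame

# The boolean mask selects only the tracks seen in the previous frame,
# so the Python-level loop touches far fewer elements.
active = tracks[tracks_tlast == t - 1]
for tracki, track in enumerate(active):
    pass  # match `track` against the newly detected points here
```

The mask comparison runs vectorised in C, so only the (hopefully short) filtered sequence is iterated at Python speed.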
>> Just out of interest, how many points/observations are you trying to track? I've been working on something for stitching together tracks from lists of positions as well and would be interested in comparing strategies.
> I just finished my first analysis of many of our files. In total I'm
> talking about 300 GB of image data, where each "movie" might have 100
> to a few thousand image frames. At most my detection algorithms found
> a few hundred (but sometimes up to 1000) points per frame.
> The longest run took about 3 hours for up to 1400 points per frame
> over a total of 2000 frames; but here I might also have run into
> memory / swapping(?) issues. (I have 6 GB of RAM on a quad-core
> 64-bit Linux machine.)
> I would really like to try Cython on this innermost loop to get
> everything re-analysed in a fraction of a day.
If you are using Cython, list lookup in itself can be very fast, but the
comparisons, any arithmetic you might do, etc. will be slow, because you
are working with Python objects inside the lists.
Consider copying to over-allocated NumPy arrays (and reallocate on need).
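A rough sketch of what "over-allocate and reallocate on need" could look like; the class name and doubling policy here are my own illustration, not an existing API:

```python
import numpy as np

class GrowableArray:
    """Append-friendly NumPy buffer: over-allocated, doubled when full."""

    def __init__(self, capacity=16, dtype=float):
        self._data = np.empty(capacity, dtype=dtype)
        self._n = 0  # number of slots actually filled

    def append(self, value):
        if self._n == len(self._data):
            # Reallocate on need: doubling gives amortized O(1) appends.
            self._data = np.resize(self._data, 2 * len(self._data))
        self._data[self._n] = value
        self._n += 1

    @property
    def values(self):
        # View of the filled portion; comparisons/arithmetic on this
        # slice are vectorised instead of going through Python objects.
        return self._data[:self._n]
```

You still pay Python-object cost at append time, but once the data is in the array, masking and comparisons such as `tracks_tlast == t-1` run at C speed, and in Cython a typed view of the buffer avoids the per-element boxing entirely.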
Dag Sverre