Sebastian Haase wrote:
> Another comment:
> Performance wise it will be much faster to generate a larger array of
> rands in one go, rather than calling the python function at every
> single time step.
> Furthermore you might also want to perform as many time-steps in a
> single python-call -- look at the cumsum function.
>> Compare the times - I guess I'm talking at least about a factor 10 in speed.
Only a factor of 2. It's probably going to be swamped by anything else one does
in the loop, like testing for intersection with the boundaries of the box. I
think the convenience and clarity of calling the function is probably going to
outweigh the performance advantage of building an array.
In [14]: from numpy import *
In [15]: %timeit for i in xrange(10000): x=random.normal()
100 loops, best of 3: 4.6 ms per loop
# The next two need to be added together since %timeit doesn't allow ; and for
# loops together.
In [16]: %timeit x=random.normal(size=10000)
1000 loops, best of 3: 1.24 ms per loop
In [17]: %timeit for y in x: pass
1000 loops, best of 3: 1.64 ms per loop
# This way of iterating is even worse. Unfortunately, it's probably the one that
# would have to be used.
In [28]: %timeit for i in xrange(10000): x[i]
100 loops, best of 3: 2.39 ms per loop
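For reference, a minimal sketch of the cumsum approach Sebastian suggests, using a modern `import numpy as np` style rather than the `from numpy import *` above (the walk itself is a hypothetical example, not code from the original simulation):

```python
import numpy as np

# Draw all the random increments for a walk in one call, then let
# cumsum produce the position after every time step in a single
# vectorized operation -- instead of one random.normal() per step.
n_steps = 10000
steps = np.random.normal(size=n_steps)   # all increments at once
positions = np.cumsum(steps)             # position after each step
```

Here `positions[i]` equals the sum of `steps[:i+1]`. Per-step logic such as testing for intersection with the box boundaries would still force a Python-level loop over the array, which is the slower iteration path timed above.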
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
-- Umberto Eco