Sjoerd Visscher <sjoerd <at> w3future.com> writes:
> On Nov 2, 2009, at 5:11 PM, Will Ness wrote:
> > Hi,
> >
> > I've run it now. In producing 100,000 primes, your above code takes
> > x3.5 more time than the one I posted.
> Too bad. The order of the primes when filtering matters quite a lot!
Yep. Most of the composites will have mostly small prime factors, most of the
time. :) That's an English paraphrase of the analysis from the paper (O'Neill's
"The Genuine Sieve of Eratosthenes"), where she proves that whatever you do with
the composites doesn't matter (in the _long_ run!) if you test primes for
divisibility - because for each prime we end up testing against _all_ of the
primes in the prefix. This is exactly where her PQ code achieves its coup - it
gets its primes for free: while checking its composites, it sees the gaps
between them.
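To put a number on that, here's a quick instrumented trial-division sketch (all
the helper names are mine, just for illustration): it counts how many
divisibility tests each candidate costs. A composite is usually dismissed by one
of the first few primes, while a prime must survive tests against the whole
prefix below its square root.

```haskell
-- Instrumented trial division (a sketch; names are mine, not from the
-- thread). A composite is rejected at its first divisor; a prime pays
-- for a test against every prefix prime p with p*p <= n.
isPrime :: Int -> Bool
isPrime n = n >= 2
         && all (\p -> n `rem` p /= 0) (takeWhile (\p -> p * p <= n) primes)

primes :: [Int]
primes = 2 : filter isPrime [3, 5 ..]

-- How many divisibility tests deciding n costs.
testCount :: Int -> Int
testCount n = length misses + if null rest then 0 else 1
  where
    qs             = takeWhile (\p -> p * p <= n) primes
    (misses, rest) = span (\p -> n `rem` p /= 0) qs

avgTests :: [Int] -> Double
avgTests ns = fromIntegral (sum (map testCount ns)) / fromIntegral (length ns)

main :: IO ()
main = do
  putStrLn $ "composites: " ++ show (avgTests (filter (not . isPrime) [100 .. 999]))
  putStrLn $ "primes:     " ++ show (avgTests (filter isPrime [100 .. 999]))
```

Running it over a range like [100 .. 999] makes the asymmetry visible: the
composites' average sits near the cost of the first couple of tests, while the
primes' average tracks the full prefix length.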
So the reason not to have these nested functions is to have all the logic
represented out in the open, in the data structure - here, still the list; a PQ
is the natural next step. With a long chain of linearly nested functions, there
isn't a lot we can do with them. :)
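Something like this, say, with Data.Map standing in for the PQ (this is just my
sketch of the direction, not her actual code - she uses a dedicated priority
queue): each odd prime's next composite, and its step, live openly in the map
instead of being buried inside a closure.

```haskell
import qualified Data.Map as M

-- Sketch of the "logic in the data structure" direction (mine, not the
-- paper's actual code): the map's keys are upcoming composites, the
-- values are the step(s) 2*p of the prime(s) that generate them.
primesPQ :: [Int]
primesPQ = 2 : sieve [3, 5 ..] M.empty
  where
    sieve (x : xs) m = case M.lookup x m of
        -- x absent from the composites map: a gap, so x is prime
        Nothing    -> x : sieve xs (M.insert (x * x) [2 * x] m)
        -- x is composite: advance every prime whose turn it was
        Just steps -> sieve xs (foldl reinsert (M.delete x m) steps)
      where
        reinsert m' s = M.insertWith (++) (x + s) [s] m'
```

Here a prime is discovered exactly as described above: as a gap - a candidate
that is absent from the map of upcoming composites.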
> > The code modified as I suggested with (qs++[p])
> > takes exactly the same time as mine. :)
> Yes, append and take are both O(n).
I would hope 'append' to be O(1) - lazy (++) only pays as its elements are
demanded - and 'take' to just translate into an upper limit on the testing loop
in 'and'.
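Concretely, here's the shape I have in mind (my reconstruction, for
illustration, not your exact code): 'and' stops at the first divisor it finds,
and lazy (++) only does O(1) work per element actually demanded, so carrying
(qs ++ [p]) along costs nothing extra up front.

```haskell
-- My reconstruction of the (qs ++ [p]) shape, for illustration.
noDivs :: Int -> [Int] -> Bool
noDivs n qs = and [n `rem` p /= 0 | p <- qs]   -- 'and' short-circuits

primesTD :: [Int]
primesTD = 2 : 3 : go [] (tail primesTD) 5
  where
    -- qs: testing primes so far, ascending; p bounds the current
    -- segment at p*p, then joins qs for the next segment.
    go qs (p : ps) from =
      [x | x <- [from, from + 2 .. p * p - 2], noDivs x qs]
        ++ go (qs ++ [p]) ps (p * p + 2)
```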
> Here's another variation which I rather like:
>
> primes = 2 : 3 : sieve (tail primes) (notDivides 3) 5
>
> sieve (p:ps) test from
>     = [x | x <- [from, from + 2 .. p * p - 2], test x]
>       ++ sieve ps (\x -> test x && notDivides p x) (p * p + 2)
>
> notDivides d x = (x `rem` d) /= 0
> I'm curious if GHC can compile this to the same speed as your code.
It depends on whether it is as good at traversing function call frames at run
time as it is at traversing the elements of a list - or whether it uses call
frames at all, or is able to compile them away entirely. :)
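One crude way to find out is to just time it (criterion would be the proper
tool; this quick CPUTime harness, wrapped around your code as quoted, is only a
sketch):

```haskell
import System.CPUTime (getCPUTime)

-- The variation as quoted above, with type signatures added.
primes :: [Int]
primes = 2 : 3 : sieve (tail primes) (notDivides 3) 5

sieve :: [Int] -> (Int -> Bool) -> Int -> [Int]
sieve (p : ps) test from =
  [x | x <- [from, from + 2 .. p * p - 2], test x]
    ++ sieve ps (\x -> test x && notDivides p x) (p * p + 2)

notDivides :: Int -> Int -> Bool
notDivides d x = (x `rem` d) /= 0

main :: IO ()
main = do
  t0 <- getCPUTime
  print (primes !! 9999)        -- force the first 10,000 primes
  t1 <- getCPUTime
  putStrLn $ "CPU time, ps: " ++ show (t1 - t0)
```

Compiled with -O2 and compared against the other version on the same count,
this at least answers the speed question empirically, whatever GHC does with
the nested closures underneath.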