I think I need to optimize this function: [source lang="cpp"]void setPrimeArray(int *arr, int num) {
    int i = 1;
    int j = 0;
    for (; i <= num; i++) {
        if (isPrime(i)) {
            arr[j] = i;
            j++;
        }
    }
}[/source]It takes a lot of time with bigger numbers. It builds an array of the primes up to num, so I can later use it to find the prime factors of a number. If anyone has ideas on how I can optimize my code, please share them with me.

Yea, that's going to be extremely slow for large numbers. You're reserving memory for num integers when really you only need memory proportional to the number of prime factors of num at the end of the day. So if you entered 1,000,000 you're looking at almost 4 megs of RAM reserved, roughly 977 pages. As you iterate through all that, you're constantly swapping pages between the CPU cache and main memory and vice versa. That's generally a slow(ish) operation.

While your solution certainly works, it's not terribly efficient overall in terms of memory (as you've seen). My suggestion is to go back to the drawing board and see if you can figure out how to calculate the primes without requiring so much memory. You're almost there, you're just being a little overzealous with calloc...

Memory allocation isn't the problem, since that extra memory will never be used; it will just sit there.
The problem is that the gaps between the big primes are so large, and you test way too many numbers.
You could cut it in half if you only test odd numbers (i += 2), but there are probably smarter ways...
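
Something along these lines (a rough, untested sketch that just reuses your existing isPrime):

[source lang="cpp"]void setPrimeArray(int *arr, int num) {
    int j = 0;
    if (num >= 2)
        arr[j++] = 2;                    // 2 is the only even prime
    for (int i = 3; i <= num; i += 2) {  // only test odd candidates
        if (isPrime(i))
            arr[j++] = i;
    }
}[/source]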

But even better, since prime numbers never change, why not precalculate them once, save to file, and then load the prime table from file instead?
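
For example, one way to precalculate would be a sieve of Eratosthenes, dumped to a file. Just a sketch; the function names and the raw-int file format are arbitrary choices:

[source lang="cpp"]#include <cstdio>
#include <vector>

// Sieve of Eratosthenes up to 'limit'; returns every prime in order.
std::vector<int> sievePrimes(int limit) {
    std::vector<bool> composite(limit + 1, false);
    std::vector<int> primes;
    for (int i = 2; i <= limit; ++i) {
        if (!composite[i]) {
            primes.push_back(i);
            for (long long j = (long long)i * i; j <= limit; j += i)
                composite[j] = true;   // mark every multiple of i
        }
    }
    return primes;
}

// Write the primes out as raw ints so they can be loaded next run.
void savePrimes(const std::vector<int> &primes, const char *path) {
    if (FILE *f = fopen(path, "wb")) {
        fwrite(primes.data(), sizeof(int), primes.size(), f);
        fclose(f);
    }
}[/source]

Loading is just the reverse: fread into a vector sized from the file length.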

Yea disregard what I said, Olof is correct. You're not accessing the majority of pages so your memory allocation is indeed not causing the problem. I still disagree with the magnitude of the allocation though...

Not really sure how much of a hint you want here but there's an abstract data type very suited to this type of search. I won't name it, but the next paragraph contains problem spoilers (if that's such a thing...):

Each number has a pair of factors that are, themselves, numbers that are either primes or smaller numbers with pairs of factors. If you keep splitting your number this way, eventually you'll have a particular type of data structure containing a bunch of numbers at the very "bottom" (my word, not the actual term typically used) that are all prime. All you have to do at that point is pick the right one...
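
If you want the splitting spelled out in code (more spoilers; the function name is mine, and it assumes n >= 2):

[source lang="cpp"]#include <vector>

// Keep splitting n into a divisor and its co-divisor; when a number can't be
// split any further it's prime, and it ends up in 'leaves' (the "bottom").
void splitIntoPrimes(long n, std::vector<long> &leaves) {
    for (long d = 2; d * d <= n; ++d) {
        if (n % d == 0) {
            splitIntoPrimes(d, leaves);      // one half of the pair
            splitIntoPrimes(n / d, leaves);  // the other half
            return;
        }
    }
    leaves.push_back(n);                     // no divisor found: n is prime
}[/source]

At that point the answer is just the largest value in leaves.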

You don't necessarily need to store factors in a file, though there's no reason not to if you really want to do it.

The main algorithmic improvement here is dividing only by numbers up to the square root of the number. (The loop below also steps by 6 and tests p and p + 2, since every prime greater than 3 has the form 6k ± 1.)

long largest_prime_factor(long n) {
    // Take out all the factors of 2
    while (n % 2 == 0)
        n /= 2;
    if (n == 1)
        return 2;
    // Take out all the factors of 3
    while (n % 3 == 0)
        n /= 3;
    if (n == 1)
        return 3;
    for (long p = 5; p * p <= n; p += 6) {
        // Take out all the factors of p
        while (n % p == 0)
            n /= p;
        if (n == 1)
            return p;
        long q = p + 2;
        // Take out all the factors of q
        while (n % q == 0)
            n /= q;
        if (n == 1)
            return q;
    }
    // If what's left has no divisor <= sqrt(n), then n is prime and we can return it.
    return n;
}

Hi. You don't need to calculate all the factors, or check whether they are prime or not. You just need to keep dividing the number, starting from 2, until you reach a prime.
I have written something quickly and it seems to be working.
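
The idea is roughly this (a sketch of the approach, not necessarily the exact code; assumes n >= 2):

[source lang="cpp"]// Keep dividing out the smallest divisor; whatever survives at the end is
// prime, and it is the largest prime factor.
long largestPrimeFactor(long n) {
    long d = 2;
    while (d * d <= n) {
        if (n % d == 0)
            n /= d;   // strip this factor and try the same d again
        else
            ++d;      // move on to the next candidate divisor
    }
    return n;
}[/source]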

Also, trial division isn't the only way to factorize numbers. Basically, factoring is a difficult problem, and it is in fact infeasible to do it when the prime factors are larger than a few hundred digits. Fortunately, you're not dealing with such large numbers.

I would expect Project Euler to specifically choose a large number to prevent you from using trial division, forcing you to implement a better algorithm. Look into Pollard's rho algorithm, or Pollard's p - 1. There's also a continued-fraction-based algorithm (SQUFOF) which is somewhat arcane, but works. The quadratic sieve is the next step up but is considerably harder to implement. The Number Field Sieve (NFS) is state of the art, but good luck implementing that, seriously. You can also look into elliptic curve factorization (the elliptic curve method, or ECM for short), which is pretty good for average-size factors but is a bit tricky to get right and hard to follow unless you understand elliptic curves.

For reference, quadratic sieve can factorize a 100-digit semiprime into two 50-digit primes in four hours or so. Trial division would have done the same in, say, a few trillion years.

All of this is probably overkill for 64-bit integers though.
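
For the curious, here's a minimal sketch of Pollard's rho for 64-bit inputs. It assumes GCC/Clang's __int128 for the modular multiply and C++17 for std::gcd, and it's illustrative rather than battle-tested:

[source lang="cpp"]#include <cstdint>
#include <numeric>   // std::gcd (C++17)

// 64-bit modular multiplication via a 128-bit intermediate (GCC/Clang extension).
static uint64_t mulmod(uint64_t a, uint64_t b, uint64_t m) {
    return (uint64_t)((unsigned __int128)a * b % m);
}

// One attempt of Pollard's rho with f(x) = x^2 + c (mod n), Floyd cycle finding.
// Returns a non-trivial factor of n, or n itself if this attempt fails
// (retry with a different c). Assumes n is odd, composite, and below 2^63
// so the "+ c" never overflows.
static uint64_t pollardRho(uint64_t n, uint64_t c = 1) {
    uint64_t x = 2, y = 2, d = 1;
    while (d == 1) {
        x = (mulmod(x, x, n) + c) % n;   // tortoise: one step
        y = (mulmod(y, y, n) + c) % n;   // hare: two steps
        y = (mulmod(y, y, n) + c) % n;
        d = std::gcd(x > y ? x - y : y - x, n);
    }
    return d;
}[/source]

To fully factor a number you'd recurse on the returned factor and the cofactor, using a primality test (e.g. Miller-Rabin) to know when to stop.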

