Closely related: what is the smallest known composite which has not been factored? If these numbers cannot be specified, knowing their approximate size would be interesting. E.g. can current methods factor an arbitrary 200-digit number in a few hours (days? months? or what?).
Can current methods certify that an arbitrary 1000-digit number is prime or composite in a few hours (days? months? not at all?).

Any broad-brush comments on the current status of primality proving, and on how active the field is, would be appreciated as well. Same for factoring.

Edit: perhaps my header question was something of a troll. I am not interested in lists. But if anyone could shed light on the answers to the portion of my question starting with "E.g.", it would be appreciated. (I could answer it in 1990, but what is the status today?)

Since we're talking about knowledge, we might wonder whether every even number, and every multiple of 5, counts as a number that is known not to be prime, even if no one has ever happened to write it down. If we do count those as known, what about multiples of 7, or of 137? There will definitely be some clear cases, but I suspect that at most points in time there will be some medium-largish number that no one has ever bothered to test for primality, even though the test would take only a moderately long time (say, a few hours). Of course, that doesn't address your main question.
– Kenny Easwaran, Oct 21 '09 at 5:42

6 Answers

Ian, I get what you're trying to say here, but you're cheating just a little bit: any reasonable way of encoding positive integers leads to an O(1) or at worst O(log n) test to check whether n is even or odd. This isn't true for primes.
– Harrison Brown, Oct 21 '09 at 7:44
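To make the point in that comment concrete: with the usual binary encoding, evenness is a one-bit test, while nothing remotely as cheap is known for primality. A trivial sketch:

```python
def is_even(n: int) -> bool:
    # O(1): evenness is just the lowest bit of the binary encoding.
    return n & 1 == 0

print(is_even(10**1000 + 7))  # instant, even for a 1001-digit number
```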

I don't think this is the right question to be asking. People aren't going to store long lists of primes. You might be able to store the first 10^12 or so primes (in some compressed form) on your hard drive; but testing the first number not in your table for primality, or factoring it, would be trivial.

Kenny's point in the comments is a good one as well, and it's also worth keeping in mind that the number of atoms in the observable universe is at most a few orders of magnitude more than 10^80, so writing down all the numbers with 100 or fewer digits is a hopeless task. (Certainly we haven't checked them all for primality!) That said, I'll try to give a rather rough answer to your questions.

1 terabyte, which is a good estimate for the size of a good commercially-available hard drive, is 2^40 bytes, or about 2^43 bits. Testing a number that small for primality is very easy (in fact, it's not terribly slow even with naive methods), so you could quite easily and cheaply list all the primes between about 1 and 2^35. Pushing higher than that, things start to get problematic -- there's just not enough space to hold all these numbers! If you gave a 1 TB hard drive to everyone on earth, there wouldn't be enough space to list all the primes between 1 and 2^80, which has about 25 digits. If you pick a random 25-digit number, odds are that it's never been tested for primality before. But it's easy to test such numbers for primality quickly, so this isn't a problem about primes so much as it is about the fact that the exponential function grows so fast. So what about prime testing? How fast can we do that?
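The arithmetic behind these estimates is easy to reproduce. Here is a back-of-the-envelope Python sketch (purely illustrative; it uses the prime number theorem approximation pi(x) ~ x/ln x and ignores compression, so the constants are crude):

```python
from math import log

def storage_bits(k: int) -> float:
    """Rough number of bits needed to list every prime below 2**k,
    assuming ~x/ln(x) primes below x and ~k bits per prime."""
    x = 2.0 ** k
    return (x / log(x)) * k

TB_BITS = 2 ** 43  # one terabyte = 2**40 bytes = 2**43 bits

for k in (35, 40, 70, 80):
    drives = storage_bits(k) / TB_BITS
    print(f"primes below 2^{k}: about {drives:.2g} terabyte drives")
```

One drive suffices up to around 2^40; by 2^80 the estimate is on the order of 10^11 drives, far more than one per person on earth.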

In practice, the best general method for primality proving seems to be elliptic curve primality proving (ECPP), usually combined with trial division by small integers. To give you some idea of how efficient it is: the biggest number that's been proved prime by ECPP methods has around 20,000 digits, and the proof took 9 months via a distributed computing project (equivalent to several years on a top-of-the-line general-purpose processor). A rough back-of-the-envelope calculation suggests that if you want a primality proof in at most a couple of weeks on a single machine, 10,000 digits is probably too much, although 1,000 seems very reasonable.
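ECPP itself is far too involved to sketch here, but in practice any such proof is preceded by a fast probabilistic test to weed out composites. Below is a minimal Miller-Rabin sketch (illustrative only, not any record-setting code; Python's built-in big integers make it usable on thousand-digit inputs in seconds):

```python
import random

def miller_rabin(n: int, rounds: int = 40) -> bool:
    """Return False if n is definitely composite, True if n is
    probably prime (error probability below 4**-rounds).
    This is evidence, not a proof -- a certificate needs ECPP."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):   # cheap trial division first
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:                # write n - 1 = d * 2**s, d odd
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:                        # a is a witness: n is composite
            return False
    return True
```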

Factoring seems much harder. 200 digits is feasible for a large distributed computing project, but 2000 is likely to be way out of reach.

Back in 1990, whatever answer was given to this question would likely have corresponded to the processing power of a single (possibly large) computer. Today, the best algorithms are ones that can be efficiently distributed, leading to successful factorizations performed by networks of computers.

With the General Number Field Sieve, one can expect to factor a 200-(decimal)-digit integer in several months of computer time. See here for some records.
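GNFS is far too elaborate to reproduce here, but Pollard's rho method gives some of the flavour of factoring beyond trial division. The sketch below is illustrative only -- it is practical for factors of up to roughly 20 digits, nowhere near what the records require:

```python
from math import gcd
import random

def pollard_rho(n: int) -> int:
    """Return a nontrivial factor of the composite number n using
    Pollard's rho cycle-finding; expected time ~ sqrt(smallest factor)."""
    if n % 2 == 0:
        return 2
    while True:
        c = random.randrange(1, n)
        f = lambda v: (v * v + c) % n
        x = y = random.randrange(2, n)
        d = 1
        while d == 1:
            x = f(x)                 # tortoise: one step
            y = f(f(y))              # hare: two steps
            d = gcd(abs(x - y), n)
        if d != n:                   # on failure, retry with a new c
            return d

print(pollard_rho(8051))  # 83 or 97, since 8051 = 83 * 97
```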

Primality testing - even with certificates - extends to much larger numbers, and I'm not sure what the current state of the art is. Integers of special form (Mersenne numbers and similarly constructed numbers, mostly) have been tested at sizes around 2^(tens of millions), but I'm not sure how high you can push the tests for general integers.
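The reason the special forms go so much further is that they admit bespoke deterministic tests. For a Mersenne number 2^p - 1, the Lucas-Lehmer test settles primality with just p - 2 modular squarings; a sketch:

```python
def lucas_lehmer(p: int) -> bool:
    """Deterministic primality test for 2**p - 1, valid for odd prime p.
    One squaring mod 2**p - 1 per step, p - 2 steps in total."""
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# Recovers the Mersenne prime exponents below 32: [3, 5, 7, 13, 17, 19, 31]
print([p for p in (3, 5, 7, 11, 13, 17, 19, 23, 29, 31) if lucas_lehmer(p)])
```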

Not really - the question is asking for the smallest number whose primality status isn't currently known, not the smallest number whose primality status can't be known. At any given time there is a smallest number whose primality status isn't currently known, and we could even have a centralized computer keep track of what that number was if we wanted. There's no paradox.
– Kenny Easwaran, Oct 21 '09 at 5:37

I agree with Ilya -- given current processing power, this feels something like asking "What's the smallest number that has never been written down on the Internet?"
– Tom Church, Oct 21 '09 at 6:49

"Closely related: what is the smallest known composite which has not been factored?"

As others have pointed out, this question can't be answered, even if you understand it as 'What is the smallest known composite none of whose factors are known?' (I can easily generate an enormous number, verify it is composite with a probabilistic test, and still know none of its factors). But there is an ongoing effort at FERMATSEARCH to factorise the Fermat numbers F_m = 2^(2^m) + 1, which gives you an idea of the state of the art.
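What makes such a search feasible is the special structure of Fermat numbers: any factor of F_m (for m >= 2) has the form k*2^(m+2) + 1, so trial division only needs to touch a sparse set of candidates. A sketch of that idea (illustrative only, not FERMATSEARCH's actual code):

```python
def small_fermat_factor(m: int, k_limit: int = 10**7):
    """Search for a factor of F_m = 2**(2**m) + 1 among candidates of
    the form k * 2**(m+2) + 1 (all factors have this form for m >= 2)."""
    e = 2 ** m
    for k in range(1, k_limit):
        q = k * (1 << (m + 2)) + 1
        if pow(2, e, q) == q - 1:    # q | F_m  iff  2**e = -1 (mod q)
            return q
    return None

print(small_fermat_factor(12))  # 114689 = 7 * 2**14 + 1, a factor of F_12
```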

Currently (October 2009), the smallest Fermat cofactor that is known to be composite but has no known factors is the 1187-digit number C1187, the remaining unfactored part of F_12 = 2^(2^12) + 1.