The method used is the Hardy-Littlewood circle method, which was for instance also used to prove Vinogradov's theorem that every sufficiently large odd number is the sum of three primes. Let's quickly recall how this argument works. It is convenient to use a proxy for the primes, such as the von Mangoldt function $\Lambda$, which is mostly supported on the primes. To represent a large number $x$ as the sum of three primes, it suffices to obtain a good lower bound for the sum

$$\sum_{n_1+n_2+n_3=x} \Lambda(n_1)\,\Lambda(n_2)\,\Lambda(n_3).$$
By Fourier analysis, one can rewrite this sum as an integral

$$\int_0^1 S(x,\alpha)^3\, e(-x\alpha)\, d\alpha,$$

where

$$S(x,\alpha) := \sum_{n \le x} \Lambda(n)\, e(n\alpha)$$

and $e(\theta) := e^{2\pi i \theta}$. To control this integral, one then needs good bounds on $S(x,\alpha)$ for various values of $\alpha$. To do this, one first approximates $\alpha$ by a rational $a/q$ with controlled denominator (using a tool such as the Dirichlet approximation theorem). The analysis then broadly bifurcates into the major arc case when $q$ is small, and the minor arc case when $q$ is large. In the major arc case, the problem more or less boils down to understanding sums such as

$$\sum_{n \le x} \Lambda(n)\, e(an/q),$$
which in turn is almost equivalent to understanding the prime number theorem in arithmetic progressions modulo $q$. In the minor arc case, the prime number theorem is not strong enough to give good bounds (unless one is using some extremely strong hypotheses, such as the generalised Riemann hypothesis), so instead one uses a rather different method, using truncated versions of divisor sum identities such as $\Lambda(n) = \sum_{d|n} \mu(d) \log \frac{n}{d}$ to split $S(x,\alpha)$ into a collection of linear and bilinear sums that are more tractable to bound, typical examples of which (after using a particularly simple truncated divisor sum identity known as Vaughan's identity) include the "Type I sum"

$$\sum_{d \le D} \mu(d) \sum_{m \le x/d} (\log m)\, e(\alpha d m)$$

and the "Type II sum"

$$\sum_{D < d \le x/D}\ \sum_{D < m \le x/d} a_d\, b_m\, e(\alpha d m),$$

where $D$ is a truncation parameter and $a_d, b_m$ are arithmetic coefficients of controlled size.
After using tools such as the triangle inequality or the Cauchy-Schwarz inequality to eliminate arithmetic functions such as $\mu(d)$ or $\log m$, one ends up controlling plain exponential sums such as $\sum_{n \le N} e(\alpha n)$, which can be efficiently controlled in the minor arc case.
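As a concrete sanity check on the last point (my own toy computation, not from the paper): the plain exponential sum $\sum_{n=1}^N e(\alpha n)$ is a geometric series, and obeys the classical bound $|\sum_{n=1}^N e(\alpha n)| \le \min(N, \frac{1}{2\|\alpha\|})$, where $\|\alpha\|$ denotes the distance from $\alpha$ to the nearest integer, so it is small whenever $\alpha$ is not too close to an integer.

```python
import cmath

def e(theta):
    # e(theta) := exp(2*pi*i*theta), the standard additive character
    return cmath.exp(2j * cmath.pi * theta)

def exp_sum(N, alpha):
    # The plain exponential sum sum_{n=1}^N e(alpha * n)
    return sum(e(alpha * n) for n in range(1, N + 1))

def dist_to_int(alpha):
    # ||alpha||: distance from alpha to the nearest integer
    return abs(alpha - round(alpha))

# Geometric-series bound |sum| <= min(N, 1/(2||alpha||)): the sum is
# small whenever alpha stays away from integers.
for alpha in [0.5, 0.123, 0.01, 2 ** 0.5 - 1]:
    S = abs(exp_sum(10**4, alpha))
    bound = min(10**4, 1 / (2 * dist_to_int(alpha)))
    print(f"alpha={alpha:.6f}  |S|={S:.3f}  bound={bound:.3f}")
    assert S <= bound + 1e-6
```

On the minor arcs, $\alpha$ stays far from rationals with very small denominator, which is what makes such plain sums efficiently controllable there.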

This argument works well when $x$ is extremely large, but starts running into problems for moderately sized $x$. The first issue is that of logarithmic losses in the minor arc estimates. A typical minor arc estimate takes the shape

$$|S(x,\alpha)| \ll \left( \frac{x}{\sqrt{q}} + x^{4/5} + \sqrt{xq} \right) \log^3 x \qquad (1)$$

when $\alpha$ is close to $a/q$ for some $q$. This only improves upon the trivial estimate $|S(x,\alpha)| \ll x$ from the prime number theorem when $q$ is significantly larger than a power of $\log x$. As a consequence, it becomes necessary to obtain an accurate prime number theorem in arithmetic progressions with moduli as large as a power of $\log x$. However, with current technology, the error terms in such theorems are quite poor (terms such as $O(x \exp(-c\sqrt{\log x}))$ for some small $c$ are typical, and there is also a notorious "Siegel zero" problem), and as a consequence, the method is generally only applicable for very large $x$. For instance, the best explicit result of Vinogradov type known currently is due to Liu and Wang, who established that all odd numbers larger than $e^{3100}$ are the sum of three odd primes. (However, on the assumption of the GRH, the full odd Goldbach conjecture is known to be true; this is a result of Deshouillers, Effinger, te Riele, and Zinoviev.)

In this paper, we make a number of refinements to the general scheme, each one of which is individually rather modest and not all that novel, but which when added together turn out to be enough to resolve the five primes problem (though many more ideas would still be needed to tackle the three primes problem, and as is well known the circle method is very unlikely to be the route to make progress on the two primes problem). The first refinement, which is only available in the five primes case, is to take advantage of the numerical verification of the even Goldbach conjecture up to some large threshold $N_0$ (we take $N_0 := 4 \times 10^{14}$, using a verification of Richstein, although there are now substantially larger values of $N_0$ for which the conjecture has been verified). As such, instead of trying to represent an odd number $x$ as the sum of five primes, we can represent it as the sum of three odd primes and an even natural number between $4$ and $N_0$. This effectively brings us back to the three primes problem, but with the significant additional boost that one can essentially restrict the frequency variable $\alpha$ to be of size $O(1/N_0)$. In practice, this eliminates all of the major arcs except for the principal arc around $0$. This is a significant simplification, in particular avoiding the need to deal with the prime number theorem in arithmetic progressions (and all the attendant theory of L-functions, Siegel zeroes, etc.).
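To illustrate the reduction in the simplest possible terms, here is a brute-force sketch (mine, not the paper's; the function names and the tiny stand-in threshold $N_0 = 100$ are purely illustrative) of writing an odd $x \ge 13$ as three odd primes plus an even number at most $N_0$, with the even number then split into two primes via the (numerically verified) even Goldbach conjecture:

```python
def is_prime(n):
    # trial division; fine for a toy demonstration
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def two_primes(m, odd_only=False):
    # write an even m as a sum of two primes (two odd primes if odd_only);
    # this is exactly where the even Goldbach verification is "spent"
    start = 3 if odd_only else 2
    for p in range(start, m // 2 + 1):
        if is_prime(p) and is_prime(m - p):
            return (p, m - p)
    return None

def five_primes(x, N0=100):
    # odd x >= 13 as 3 + (two odd primes) + (two primes summing to an
    # even m with 4 <= m <= N0): at most five primes in total
    for m in range(4, N0 + 1, 2):
        y = x - m - 3  # even, to be split into two odd primes
        if y < 6:
            break
        pair_y = two_primes(y, odd_only=True)
        if pair_y:
            return (3,) + pair_y + two_primes(m)
    return None

for x in range(13, 500, 2):
    rep = five_primes(x)
    assert rep is not None and sum(rep) == x and len(rep) <= 5
    assert all(is_prime(p) for p in rep)
print("all odd x in [13, 500) expressed as a sum of at most five primes")
```

Of course, the actual argument is analytic and covers all odd $x$ at once; the point of the sketch is only the bookkeeping of how the Goldbach numerics absorb two of the five primes.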

In a similar spirit, by taking advantage of the numerical verification of the Riemann hypothesis up to some height $T_0$, and using the explicit formula relating the von Mangoldt function with the zeroes of the zeta function, one can safely deal with the principal major arc. For our specific application, we use a value of $T_0$ arising from the verification of the Riemann hypothesis for the first $10^{10}$ zeroes by van de Lune (unpublished) and Wedeniwski. (Such verifications have since been extended further, the latest being that the first $10^{13}$ zeroes lie on the line.)

To make the contribution of the major arc as efficient as possible, we borrow an idea from a paper of Bourgain, and restrict one of the three primes in the three-primes problem to a somewhat shorter range than the other two (of size $x/K$ instead of $x$, for a suitable scale parameter $K$), as this largely eliminates the "Archimedean" losses coming from trying to use Fourier methods to control convolutions on the real line. In our paper, the precise choice of the scale parameter is not delicate (basically, anything in a wide intermediate range will work), but we found that an additional gain (which we ended up not using) could be obtained by averaging over a range of scales. This sort of averaging could be a useful trick in future work on Goldbach-type problems.

It remains to treat the contribution of the "minor arc". To do this, one needs good $L^2$ and $L^\infty$ type estimates on the exponential sum $S(x,\alpha)$. Plancherel's theorem gives an $L^2$ estimate which loses a logarithmic factor, but it turns out that on this particular minor arc one can use tools from the theory of the large sieve (such as Montgomery's uncertainty principle) to eliminate this logarithmic loss almost completely; it turns out that the most efficient way to do this is to use an effective upper bound of Siebert on the number of prime pairs $p, p+h$ up to $x$ to obtain an $L^2$ bound that only loses a bounded factor (and a smaller one still, once one cuts out the major arc).
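To see where the logarithmic loss in the $L^2$ estimate comes from: Plancherel identifies $\int_0^1 |S(x,\alpha)|^2\, d\alpha$ with $\sum_{n \le x} \Lambda(n)^2$, which by the prime number theorem grows like $x \log x$ rather than $x$. A quick numerical check of this standard fact (my own code, not from the paper):

```python
import math

def lambda_squared_sum(x):
    # sum_{n <= x} Lambda(n)^2 via a sieve: Lambda(p^k) = log p, else 0
    sieve = [True] * (x + 1)
    total = 0.0
    for p in range(2, x + 1):
        if sieve[p]:
            for multiple in range(p * p, x + 1, p):
                sieve[multiple] = False
            pk = p
            while pk <= x:  # prime powers p, p^2, p^3, ...
                total += math.log(p) ** 2
                pk *= p
    return total

x = 200000
ratio = lambda_squared_sum(x) / (x * math.log(x))
# the ratio tends to 1 (slowly); it is already fairly close here
print(f"sum Lambda(n)^2 / (x log x) = {ratio:.3f}")
assert 0.75 < ratio < 1.02
```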

For $L^\infty$ estimates, it turns out that existing effective versions of (1) (in particular, the bound given by Chen and Wang) are insufficient, due to the three factors of $\log x$ in the bound. By using a smoothed out version

$$S_\eta(x,\alpha) := \sum_n \Lambda(n)\, \eta(n/x)\, e(n\alpha)$$

of the sum $S(x,\alpha)$, for some suitable cutoff function $\eta$, one can save one factor of a logarithm, obtaining a bound of the form

$$|S_\eta(x,\alpha)| \ll \left( \frac{x}{\sqrt{q}} + x^{4/5} + \sqrt{xq} \right) \log^2 x$$

with effective constants. One can improve the constants further by restricting all summations to odd integers (which barely affects $S_\eta(x,\alpha)$, since $\Lambda$ is mostly supported on odd numbers anyway), which in practice reduces the effective constants by a factor of two or so. One can also make further improvements in the constants by using the very sharp large sieve inequality to control the "Type II" sums that arise from Vaughan's identity, and by using integration by parts to improve the bounds on the "Type I" sums. A final gain can then be extracted by optimising the cutoff parameters appearing in Vaughan's identity to minimise the contribution of the Type II sums (which, in practice, are the dominant term). Combining all these improvements, one ends up with bounds in which the $\log x$ factors have been partially replaced by smaller logarithmic factors such as $\log q$ or $\log \log x$, with slightly different shapes in the regimes of small and large $q$ (see the paper for the explicit versions of these estimates). Putting together all of these improvements, one can finally obtain a satisfactory bound on the minor arc. (There are still some terms with a factor of $\log x$ in them, but we use the effective Vinogradov theorem of Liu and Wang to restrict attention to $x \le e^{3100}$, so that $\log x$ can be bounded by $3100$, which ends up making the remaining terms involving $\log x$ manageable.)
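The gain from smoothing can be seen already for the plain exponential sum: replacing the sharp cutoff $1_{n \le N}$ by a smooth window makes the sum at a fixed nonzero frequency dramatically smaller, since the Fourier transform of a smooth cutoff decays much faster. A toy comparison (illustrative only; the Hann window here is just one convenient choice of cutoff, not the $\eta$ used in the paper):

```python
import cmath, math

N = 1000
alpha = 0.0123  # a frequency a bit away from 0

def e(theta):
    return cmath.exp(2j * cmath.pi * theta)

# Sharp cutoff: sum_{n<=N} e(alpha n); its size is governed by the
# slowly decaying tail 1/(2*||alpha||) of the sharp cutoff's transform
sharp = abs(sum(e(alpha * n) for n in range(1, N + 1)))

# Smooth (Hann) cutoff eta(t) = sin^2(pi t) on [0,1]: the smoothed sum
# at the same frequency is far smaller
smooth = abs(sum(math.sin(math.pi * n / N) ** 2 * e(alpha * n)
                 for n in range(1, N + 1)))

print(f"sharp cutoff: {sharp:.3f}   smooth cutoff: {smooth:.5f}")
assert smooth < sharp / 10
```

In the actual argument the effect of smoothing is to shave one factor of $\log x$ from the effective minor arc bound, but the mechanism is the same faster Fourier decay.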

I believe anon’s point is that the title is technically ambiguous. It’s a minor point and most/all mathematicians will automatically know what was intended, because this is a sort of grammatical oddity that is actually very common in the subject’s literature. But you could interpret the title to mean “every odd integer larger than 1 cannot be written as the sum of more than five primes.”

The thing that you are talking about is a different issue. The statement "41 is the sum of at most five primes" means that every odd number you'll see is either a prime number; if not, then the sum of two primes, or the sum of three primes, or the sum of four primes; and if not even that, then definitely the sum of five primes.

Nowhere does it say that it can't be a sum of more than five numbers. Here "at most" precisely means that if you want to express any odd number greater than 1 using the minimum possible number of primes, then you can do so with at most 5 primes; that's the simple mathematical statement!

Yes it does! "At the most" means that if you keep checking, you never need to go to 6 primes; that is, every odd integer greater than 1 is a sum of at most five primes.

If you think you are right, then give a counterexample: an odd number which is NOT the sum of five or fewer prime numbers. According to this theorem you don't have any such examples. (If you have one, then show it here!)

This theorem is perfectly logical and its proof is very, very nice. I suggest you go and read the proof. It may clear up your doubt!

The problem is that an odd number can be 2 things at the same time (a sum of at most 5 primes, and also a sum of more than 5).

The counterexample you ask me to give is not appropriate, since the fact that I cannot find one does not affect the fact that I can find the same number as a sum of 6 primes.

“Every odd integer larger than 1 is the sum of at most five primes”

This excludes 6, even if the same number can at the same time be expressed with 5.

The Goldbach conjecture says that every even number greater than or equal to 4 is the sum of two prime numbers.

If you could show me that an even number can be represented as the sum of 3 or more prime numbers, I would shut up.

Or any such theorems/hypotheses/conjectures well accepted in the literature where a similarity could be drawn with the title. It is superficially said above that such a way of talking is common, but I have not seen any example.

A title that would have no ambiguities at all would be:

Every odd integer greater than 3 can be represented as the sum of five primes or less.

Ok. I was going to write that I would shut up forever, since that is exactly what I was asking for to understand.

BUT. Before finally giving up, I checked the definition of the Goldbach conjecture on Wolfram MathWorld, which seems to be a reliable source. And then to my surprise: they do not use "is", they use "can be" (exactly my point):

As re-expressed by Euler, an equivalent form of this conjecture (called the "strong" or "binary" Goldbach conjecture) asserts that all positive even integers greater than or equal to 4 CAN BE expressed as the sum of two primes.

Even wikipedia does not use "IS"; it uses "can be".

If the Wolfram/wikipedia definition were not "CAN BE expressed", I would understand that mathematicians agree on this misleading language. But the point is that that is not the case.

Then, I insist again. It should say “can be expressed” and not “is”.

I really doubt that Oxford University is arbitrarily substituting "can be represented" for "is" just for fun, or by coincidence with my arguments.

You’re missing the point. The phrase is used in the relevant literature and so it is well understood in that context. However, in standard English, “at most” would typically imply “not more than”. For example, if you can take at most 2 pieces of luggage onto a plane, it means you can’t take 3 pieces of luggage.

The analogous conclusion would be that “41 is the sum of at most 5 primes” means you can’t find 6 primes adding up to 41. That’s why the current title is slightly problematical outside of the context of the relevant literature. Obviously this isn’t actually a big deal, just a grammatical quirk.

Make that “five primes or fewer”, not “less”. That said, a concise but somewhat imprecise title is often better than a much longer pedantically correct version. The title benefits from the context of the paper itself, but also of the very rich literature in this area.

It is not about being pedantic, it's about accuracy. Is the University of Oxford being pedantic when correctly expressing the Goldbach conjecture, namely, (1) excluding the number 4 [as should be done here for the number 3] and (2) replacing "is" by "can be"?
The literature where you can read "with a mass of 1kg" is rich as well... but that does not mean it is correct.

1) A typo occurs at page 3, eq. (1.4); by the PNT, the L^2 estimate for S should be X \log X and not \log X.

2) The use of smoothed versions of S is well known; in fact the original Hardy-Littlewood approach uses an infinite series over primes weighted with an exponential weight. See HL, Partitio Numerorum III. This should be mentioned in the introduction, I think.

[These will be addressed in the next revision of the ms – thanks! – T.]

[…] to be peer reviewed but the probability that it’s correct is pretty high. Tao has a nice discussion and summary of his approach on his blog — although it employs some powerful modern mathematical tools, […]

Please excuse me for going slightly off-topic, but I have a question regarding series. Is it possible to prove that in the binomial expansion of (x + y)^k the coefficient of the y^2 term tends to +infinity as k tends to +infinity?

In section 2 you define floor(x) — well, actually the usual broken-bracket notation — to be “the greatest integer less than x”. Is that, rather than “the greatest integer not exceeding x”, really what you meant? It means that, e.g., [4] = 3, which is not the usual convention.

And what happens with the number 3? You mean 3+0? 2+1?
Please explain it to me. The algebraic sum is not defined for only one term. Then your theorem is not valid for the number 3.
I think you are using the old terminology of Goldbach, when he considered 1 to be a prime.
If the answer is that you can sum only one term, then the theorem should be reducible to "any COMPOSITE odd number...", since for the prime numbers it would be the trivial case.
In any case I find this incompatible with the definition of sum.
I am no mathematician... then I must be wrong.
In what am I wrong?

According to this "redefinition of sum", the Goldbach conjecture as stated by Euler would not say greater than or equal to 4; rather, it would include all even numbers, namely the number 2.
I think Euler is on my side.

I insist then on the other argument. Look for the definition of addition in any book of algebra and try to find out whether it is possible to sum "only one number".
See wikipedia as well. Summation allows it; the sum (addition) does not.

"Does the theorem say this is always possible with a set of primes that has no repeats? e.g., would (3+3+3) be enough to demonstrate that this holds for 9?" (a doubt of Vynce Montgomery I saw in a post on G+)

anon, you know what problem I have? That "sum" can be translated into Spanish either as "addition" (suma = adición) or as "summation" (sumatorio), unlike in English. If I am not wrong, SUM in English is at the same time addition and short for summation.

I think that we are talking about two things: sum (addition, +) and sum(mation, sigma).

I understand that addition is by definition something involving more than one term (all definitions I have checked so far say so, since the operator + needs a term on each side; it is impossible to sum or add one element of a set to nothing, or in any case to 0, which is not prime).

On the other hand, the summation by definition allows the existence of only one term.

Then the theorem, when "addition" (suma) is meant, should not be true for 3, if I am right. It would be correct if sum is translated as "summation".

In other words: is addition in algebra defined for one term? I do not think so. And is summation defined for one term? I think yes.

And... on the other hand, if sum is not considered as addition, I can demonstrate very quickly the following theorem, which would be absurd to claim:
Every prime is the sum of at most five primes.
Or even: every prime is the sum of at most ONE prime.

And to finish: if the title of the article were right, the Goldbach Conjecture as restated by Euler would cover all positive even integers, and would not be restricted to those greater than 4, as it is now.

From wolfram website:
As re-expressed by Euler, an equivalent form of this conjecture (called the “strong” or “binary” Goldbach conjecture) asserts that all positive even integers GREATER THAN 4 can be expressed as the sum of two primes.

So, I am fully convinced that the title of this paper is plainly wrong ;-( The sum is understood as addition, and not only as summation in a degenerate state.

You seem so confused, so let me explain it to you: this is Prof. Tao's blog, so it is not well seen by many of his young fans to criticize the title he chose for his article. This is in fact a pre-print, which means it can still be corrected in the final version.

Now we all know what a “pre-print” means : carefulness and further corrections.

As far as I understood, the proof is independent of the Riemann hypothesis. If RH is false (is this ever going to be proved???), it would not be a problem anyway. Conversely, since the Riemann hypothesis implies the result of this paper, IF the paper were false (it is not), then the Riemann hypothesis would be false (??).

Giovanni, the number of primes needed was already known to be finite but I think the previous number was absurdly large — every positive integer is the sum of at most 100,000 primes, or something like that.

Riemann, it was already known that if the Generalized Riemann Hypothesis is true then 3 primes suffice for all odd numbers and that if the plain ordinary Riemann Hypothesis is true then 5 primes suffice for all. Prof. Tao’s recent work shows that the second of these is true without RH.

Reblogged this on Between numbers and commented:
Check this beautiful result! Even if you don’t know anything about number theory, this theorem leaves you pleasantly shocked. Kudos to professor Terence Tao. :)

Please excuse this question if it is answered in your preprint (I did skim through it). Is there something that prevents your methods from showing that every even integer is the sum of 4 or fewer primes? (This seems to lie somewhere in between the odd Goldbach conjecture and your result.)

Well, in my paper I spend two of the five primes by using the existing numerical verification of the even Goldbach conjecture, which asserts that every even number between $4$ and $N_0 := 4 \times 10^{14}$ is the sum of two primes. As such, the problem reduces to a variant of the three-primes problem in which one is trying to represent a large number as the sum of three odd primes and a number less than $N_0$. This has the effect of restricting to a single major arc in the circle method, namely the one near the zero frequency, for which one can use the (very substantial) numerical work on the Riemann hypothesis.

To mimic this for the four-primes problem, one could try to write a large number as the sum of three primes and a prime less than, say, 10^12. The Fourier transform of the von Mangoldt function on the latter set can be computed numerically (and one can use GRH to predict in advance what the statistics of that Fourier transform will be). However, it will not be completely localised to the principal major arc at 0, but will also have some presence at other rationals a/q for small values of q, simply because the primes involved will mostly be coprime to q. Because of this, when the time comes to apply the circle method, one will eventually have to obtain good prime number theorems in arithmetic progressions modulo q for small q, or equivalently to use zero-free regions for Dirichlet L-functions with conductor q. There is certainly some numerical work in this direction, but it is nowhere near as strong as what is known for the Riemann zeta function. As a consequence, I would expect that in the absence of some new ideas, the methods here might be able to show that every even number larger than some large explicit threshold is the sum of four primes, but probably would fall short of solving the four-primes problem completely. (I haven't actually done the calculation, though.)
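The lack of localisation mentioned above can be seen numerically: since the primes up to $P$ are almost all coprime to $q$, the sum $\sum_{p \le P} e(ap/q)$ has magnitude comparable to $\pi(P)/\varphi(q)$ instead of exhibiting cancellation. A quick illustration with $q = 3$ (my own computation, using the fact that $e(1/3) + e(2/3) = -1$):

```python
import cmath

def primes_up_to(P):
    # simple sieve of Eratosthenes
    sieve = [True] * (P + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(P ** 0.5) + 1):
        if sieve[p]:
            for m in range(p * p, P + 1, p):
                sieve[m] = False
    return [n for n in range(2, P + 1) if sieve[n]]

def prime_exp_sum(P, a, q):
    # sum over primes p <= P of e(a p / q)
    return sum(cmath.exp(2j * cmath.pi * a * p / q) for p in primes_up_to(P))

P = 10**4
S = prime_exp_sum(P, 1, 3)
count = len(primes_up_to(P))  # pi(P)
# the primes split evenly between the residues 1 and 2 mod 3, and
# e(1/3) + e(2/3) = -1, so S is close to -pi(P)/2: no cancellation
print(f"pi(P) = {count},  S = {S:.1f},  -pi(P)/2 = {-count / 2:.1f}")
assert abs(S - (-count / 2)) < 0.1 * count
```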

Similarly, I would imagine that one could insert the bounds here (together with recent work on zero-free regions for Dirichlet L-functions, such as that of Kadiri) to improve the Chen-Wang result on the three-primes problem, lowering the current threshold of $e^{3100}$ to something a bit smaller (though again I have not actually done the calculation, and these guesstimates may be too conservative).

Bennett: As I understand it, Tao’s proof proves that all possible numbers should never need more than 5, but all numbers that we’ve ever actually tested need at most three (two for the even numbers, three for the odd numbers). Tao has proved an upper bound in theory, but in practice, the upper bound is less than that. So nothing requires all five, but in theory, there may be such a number way way way out along the number line… or it may be that someone will find a better proof in the future that brings that upper bound down.

Bennett: The Goldbach conjecture asserts that every odd integer can be written as a sum of three primes (or equivalently, that every even integer can be written as the sum of two primes), and these conjectures are widely believed to be true. If we could show that there was an integer which could not be written as the sum of four primes (i.e., if we could find an integer that needed five primes), then this would disprove the Goldbach conjecture.

Unfortunately, to get to at most four primes by this method, one would start having to get good prime number theorems in arithmetic progressions, which would require some numerical work on GRH in addition to RH. There is such work out there, but it is nowhere near as strong as the numerical work on RH.

On the other hand, if one inserted the best numerical work on the even Goldbach problem, it would be possible to reduce the reliance on Vinogradov-type results. Currently I am using the strongest known result in this direction, namely that every odd integer greater than $e^{3100}$ is the sum of three odd primes, proven by Liu and Wang. If one used even Goldbach numerics up to a sufficiently larger threshold, then I could replace the Liu-Wang result with the earlier result of Chen and Wang, which had the weaker threshold $e^{e^{11.503}}$. Also, better numerics would give more precise bounds on the number of representations of an odd number as the sum of at most five primes (though actually my method does not count all such solutions, but rather a somewhat technical weighted subcount of these solutions, so this refinement would not be of much external interest).

Unfortunately not; this problem is in fact more or less equivalent to the question of whether every even number can be written as the sum of at most four primes, since to express an even number as the sum of five primes, the five primes cannot all be odd, and so one of them must equal two.

On the other hand, it is a corollary of the odd five primes result that every even number can be expressed as the sum of at most six primes.

[…] the results be all odd integers larger than 1 can be expressed in sum of no more than five primes every odd integer larger than 1 is the sum of at most five primes. But most importantly, it brings us incessant challenge, so those guys who seeks for significance […]

[…] problem a relatively recent update on Goldbach conjecture has been given by a Fields medalist. https://terrytao.wordpress.com/2012/02/01/every-odd-integer-larger-than-1-is-the-sum-of-at-most-five-… […]

[…] famous theorem that every sufficiently large odd integer is the sum of three primes; my own result that all odd numbers greater than 1 can be expressed as the sum of at most five primes is also proven by essentially the same method (modulo a number of minor refinements, and taking […]

[…] Recently, Terry Tao proved that every odd integer is the sum of at most 5 primes. The key thing is that he doesn’t have to assume the validity of the Riemann Hypothesis to do it. So Kaniecki’s theorem is true without any qualifications. It’s a very nice result and Dr. Tao posted about it on his blog.** […]

I don't understand one thing: how does this proof work for all odd integers greater than 1 without relying upon the Riemann hypothesis?

The number of odd integers greater than 1 is infinite.
This proof relies upon the numerical verification of the Riemann hypothesis up to some height.

Since Kaniecki proved that every odd integer is a sum of at most five primes assuming that the Riemann Hypothesis is true, how does this proof differ much from Kaniecki’s since it relies upon the numerical verification of the Riemann hypothesis up to a certain height?

Kaniecki's argument and mine use the Riemann hypothesis (or partial numerical verifications of it) in somewhat different ways. Kaniecki uses a quantitative version of a well-known fact, namely that if the Riemann hypothesis were true, then almost all prime gaps would be very small, which (together with numerical verification of the even Goldbach conjecture) implies that almost all odd numbers (in a certain range) are the sum of three primes, and hence that all even numbers (in a similar range) are the sum of six primes. (I'm glossing over a number of details here.) Unfortunately this argument doesn't work too well with only a partial numerical verification of RH, more or less for the reason you point out, namely that one has to verify the conjecture for an infinite number of numbers. (Actually, thanks to the Vinogradov theorem results of Liu and Wang and others, one only has to verify the numbers up to 10^{1300} or so, but even in this range Kaniecki's argument would still require far more numerical verification of the RH than is currently available.)

In my argument, numerical verification of RH is used in a slightly different way, to get a reasonably good error term in the prime number theorem, which in turn leads to control of prime exponential sums at very low frequencies. This, by itself, is nowhere near enough to settle the conjecture (in contrast to Kaniecki’s argument, in which RH is the main tool), again because the numerical verification of RH falls well short of what would be needed to cover the range of numbers desired without additional tools; the estimates from numerical RH have to be combined with effective minor arc estimates (to handle medium frequencies) and numerical verification of Goldbach (to handle large frequencies) in order to cover all the contributions to the counting function associated to this problem.

According to the proposition, every odd integer greater than 1 is the sum of "at most" five primes. But if we had an odd number of odd primes, their sum would always be odd. So it could be possible that an odd number is a sum of 7 odd primes or 9 odd primes. Then the proposition would no longer remain true. For example: 3+5+7+11+13+17+19 = 75. But 75 here is a sum of 7 primes. ?????????

Hi Tao, do you think that a larger numerical verification could be useful to improve this result? If yes, do you have an idea of what exponent we must reach at least?
Thank you and congratulation for the result.
Silvio

Just came across this. Regarding inputting numerical verification of RH: most verifications of RH have been experimental and cannot be used as the basis of a proof of a rigorous mathematical statement. That includes the computations referenced.

They used explicit truncation bounds and interval arithmetic to bound the accumulated round-off. In principle, their work provides a rigorous verification of RH. However, there is of course a lot that takes place in a black box (e.g. Intel chips) or at a very low level (compilers, assemblers). Hardware has never been certified rigorously, and there have been cases of buggy hardware uncovered experimentally. Compilers are also constantly having bug fixes (look at the version history of any compiler).

There are some easy tests that one can do with zeta zeros to increase one's level of confidence. For me, testing the explicit formula with many test functions gives me confidence that a given zeta-zeros computation is correct.

Which of the greatest open problems in math are we closest to solving?…

Terence Tao recently proved that every odd integer larger than 1 is the sum of at most five primes. This advance pushes closer to the odd Goldbach conjecture, which replaces five primes with three primes. He said in a talk recently that we are probably…

Is the paper already published?
Today it appears in the news (SUPER SCHOLAR test) that Terence is one of the most intelligent HUMAN BEINGS alive (IQ 230).
With that IQ it is no merit to find such math wonders! ( xD ) Congrats!

I have seen several references saying Borodzkin was the first to find an explicit value for the Vinogradov bound for the odd Goldbach conjecture, and that he showed the value to be 3^(3^15) (that is, 3 raised to the power 3^15).

However, this result is not published. I would love to figure out how this bound was determined. Do you know of any reference that shows how this bound is determined? Is it so simple that you can give a brief, high-level explanation for it here?

Dear Professor Terry Tao,
The proof uses many specialised theorems that I can't fully understand. I wonder what counts as a large even number?
If 2 is a large number, it can't be expressed as any sum of primes.
If infinity is a large number, it always has a summand that is infinite.
You can look at my idea at http://vixra.org/abs/1301.0129
Sincerely,
Liu Ran

Dear Professor Terry Tao,
You can look at my new article at http://vixra.org/abs/1304.0020
Both experiment and logic have demonstrated that the claim that the primes are infinite is wrong.
It means Euclid's proof had an issue which has misled mathematicians for many years.
I admire you, and wish you would discuss this problem. Maybe it's a big event.
Sincerely,
Liu Ran

[…] Today, part of the appeal of prime numbers is that there remain many apparently-simple questions about primes whose answers are unknown. (One famous example is Goldbach’s conjecture, posed in 1742 in a letter to Leonhard Euler, that every even integer > 2 can be written as the sum of two primes. Note, however, that Terence Tao has recently proven that every odd integer > 1 is the sum of at most 5 primes). […]

Odd integer = 17+11+23+5+29+3+31 = 119 derived from the
following series of twin-primes anchored by 17:
f(17,17,17)+f(17,11,23)+f(17,5,29)+f(17,3,31)
This is the sum of seven primes and is odd.
The sum has more than five primes as stipulated by you.
Am I wrong?
Huen Y.K.

Please test this Maxima line on Maxima Online:
for n from 2 thru 20 do print(sum(f(n,n-k,n+k)*(is(equal(primep(n-k)*primep(n+k),true^2))-unknown)/(true-unknown),k,0,n));
First six outputs are shown here. All are twin primes, including identical primes.
f(2,2,2)
f(3,3,3)
f(4,3,5)
f(5,5,5)+f(5,3,7)
f(6,5,7)
f(7,7,7)+f(7,3,11)

These show that every prime is half the sum of two identical copies of itself. Therefore 1 should qualify as a prime i.e. 1 = (1+1)/2.
Huen Y.K.

It's simply a question of what's a useful definition. In stating the fundamental theorem of arithmetic, for example, and lots of others, we'd have to say "every prime except 1", and when stating results about prime ideals we might have to say (when treating the ring as a ZZ module) "every prime apart from 1", etc. So instead of doing that, we just exclude 1 from the definition.

Of course, if for your purposes you wish to define it to mean something different, then the only thing you have to do is define it that way in your paper.

Thanks for your thoughts (and time) – personally I don’t think 1 is a prime as it can be described as n to the power of 0

For the purposes of my conjecture I would exclude 1 as a prime and say that 2,3 and 11 are the only prime numbers that cannot be expressed as a sum of a number of unique primes (ie only using each prime once).

For commenters

To enter in LaTeX in comments, use $latex <Your LaTeX code>$ (without the < and > signs, of course; in fact, these signs should be avoided as they can cause formatting errors). See the about page for details and for other commenting policy.