
stephen.schaubach writes "Spanish Mathematicians have discovered a new pattern in primes that surprisingly has gone unnoticed until now. 'They found that the distribution of the leading digit in the prime number sequence can be described by a generalization of Benford's law. ... Besides providing insight into the nature of primes, the finding could also have applications in areas such as fraud detection and stock market analysis. ... Benford's law (BL), named after physicist Frank Benford in 1938, describes the distribution of the leading digits of the numbers in a wide variety of data sets and mathematical sequences. Somewhat unexpectedly, the leading digits aren't randomly or uniformly distributed, but instead their distribution is logarithmic. That is, 1 as a first digit appears about 30% of the time, and the following digits appear with lower and lower frequency, with 9 appearing the least often.'"

Numbers are objects. I wish people would understand that numbers are just distinctions. The whole of mathematics is really just a language of form and structure, a system to systematize and describe structures and forms (relationships are a type of form).

But how many would contain all 1s? Answer that, and provide a proof for your answer, and you'll make math history.

Obviously the number of digits would have to be a prime number. But not every prime number of digits gives you a prime number. The first failing case is 11 digits: in binary, eleven 1s is 2^11 - 1 = 2047 = 23 * 89.

But how many would contain all 1s? Answer that, and provide a proof for your answer, and you'll make math history.

For those who didn't get it: it's not known whether there are infinitely many Mersenne primes [wikipedia.org], which have this form in binary (they're primes of the form 2^n - 1). Similarly, if you could figure out how many primes have only their first and last bits equal to 1, you would answer a longstanding question about Fermat primes [wikipedia.org] (which are primes of the form 2^n + 1).

Since RSA relies on it being a Hard Problem to factor a number that is the product of two very large primes, this potentially introduces a weakness, as it presumably means some of those products will be easier to factor than others. A number that can be shown to be the product of two primes that both start with 9 should be much easier to work on than a number where both primes start with 1, since there are far fewer primes starting with 9 and therefore a smaller search space.

It wouldn't change the logarithmic nature of the distribution of the digits, AFAIK.

My math degree is getting dusty, but I'm pretty sure that the same pattern could be represented in another base by changing their generalization of Benford's law to include it, and the distribution would look like log(x)/log(9) or log(x)/log(11). Remember, changing the base of a logarithm is easy: for example, log(x)/log(e) = ln(x)

(Warning, IANAM.) It's base independent. Basically, primes are distributed on a logarithmic scale (prime number theorem). For sufficiently large intervals, there are always more primes in the interval starting at x than in the interval of the same size starting at y, if x < y. For example, there are more prime numbers in 1000..1999 than in 9000..9999.

I've often wondered how many patterns we are missing, especially with regard to our "special" numbers (Pi, Phi, e, primes...), because we mostly deal in base 10. If we suffer a Class 1 or Class 2 Apocalypse [tvtropes.org], I intend to seize the opportunity and implement hexadecimal as the default number system.

Benford's law works by the observation that, when numbers come up in certain real-world contexts, the fluctuations you get should be proportional to the numbers themselves. Phrased differently, variations tend to be relative, not absolute. Because of this, if you have a very large range of random numbers from many real-world measurements, you would expect the count of numbers between t and t*1.0001 not to vary too much for small changes in t. Let us use this observation very coarsely. Among the numbers with 6 digits, the count of numbers that look like 1xxxxx (those between 100000 and 200000) should be about the same as the count between 200000 and 400000. The same thing happens with numbers of 5 digits or 7 digits or n digits (assuming that you have a wide range of random numbers, and the numbers are the kind that come from certain sorts of real-world measurements). Additionally, you can get distributions for the first two digits, the first three digits, etc.
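
The "relative, not absolute, fluctuations" mechanism is easy to reproduce. A minimal Python sketch (the 1.0-1.1 growth factor range, the 100,000-step length, and the 1e12 wrap point are all arbitrary choices for illustration): a multiplicative random walk produces leading digits that closely track Benford's log10(1 + 1/d):

```python
import math
import random

def leading_digit(x):
    # Shift the decimal point until exactly one digit remains to its left.
    while x >= 10:
        x /= 10
    while x < 1:
        x *= 10
    return int(x)

random.seed(1)
samples = []
x = 1.0
for _ in range(100_000):
    # Relative fluctuation: multiply by a random factor, never add one.
    x *= random.uniform(1.0, 1.1)
    if x > 1e12:       # wrap to stay in comfortable float range; dividing by
        x /= 1e12      # a power of 10 doesn't change the leading digit
    samples.append(x)

counts = {d: 0 for d in range(1, 10)}
for s in samples:
    counts[leading_digit(s)] += 1

for d in range(1, 10):
    print(d, counts[d] / len(samples), round(math.log10(1 + 1 / d), 3))
```

Nothing here is specific to base 10: redo `leading_digit` in any base and the same logarithmic shape appears.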

This observation doesn't depend on the base that you're working with.

Now, the prime numbers have a distribution that is different from a lot of real-world measurement data. The number of primes between n and n+d is approximately d/ln(n), where ln is the log with base e and d is small compared to n. So the number of primes between 500,000 and 600,000 is about 100,000/ln(500,000), and the number of primes between 600,000 and 700,000 is about 100,000/ln(600,000). By using this, and being slightly more careful, one can fairly easily determine the distribution of the leading digits of the prime numbers.
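
A quick sieve makes it easy to compare the d/ln(n) estimate against the actual prime counts; a minimal Python sketch (interval endpoints chosen to match the example above):

```python
import math

def sieve(limit):
    """Return a boolean list where index i is True iff i is prime."""
    is_prime = [True] * limit
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p::p] = [False] * len(is_prime[p * p::p])
    return is_prime

is_prime = sieve(700_000)
d = 100_000
results = {}
for n in (500_000, 600_000):
    actual = sum(is_prime[n:n + d])    # primes in [n, n + d)
    estimate = d / math.log(n)         # PNT density approximation
    results[n] = (actual, estimate)
    print(n, actual, round(estimate))
```

The two numbers agree to within about one percent on these intervals, which is all the leading-digit argument needs.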

This is not a hard result. I would say that any professional mathematician who knew about the basic distribution of the primes could derive the distribution of the leading digits of the prime numbers fairly easily if anybody actually asked them to. The reason nobody mentioned this before is that nobody actually cares. While Benford's law does have applications to fraud detection, this new result does not. It's one of those things that makes people say "ooh, a pattern!" but which is just an easy and somewhat mundane corollary to a well known theorem.

Benford's "law" is not a law at all... any exponential distribution will exhibit this behavior.

A law, as the word is commonly used in math and physics, is a mathematical expression of a universal relationship. As you say, Benford's law is a property of any exponential distribution, so we agree it's universal. Why then can't we call it a law? Just because it's obvious after you understand it doesn't make it any less a law.

I'm not a mathematician, could someone explain why this is surprising in terms that a computer programmer or biologist could follow? First thing I thought - no matter how many innings you have, you can guarantee that the top of the order will be up at least as many times as the bottom of the order.

Okay, if you have a uniform random number on the interval (1, 10^X), all the leading digits will be equally likely.

If you have some other interval (1,n*10^X), 1<=n<=9, then the leading digits > n will appear roughly 1/10 as often as leading digits 1..n.

If you have a large sample which is drawn from an admixture of some huge number of random distributions (1,n*10^X), with the "n" of each sub-distribution evenly distributed on 1..9, then the lower leading digits will be moderately more common, yeah?

Prime numbers, meanwhile, become decreasingly common as you get larger and larger, is that not correct? So it seems to me this is the obvious way to model prime numbers, no?
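
The admixture model described above is simple to simulate; a minimal Monte Carlo sketch (the sample count and the range of magnitudes X are arbitrary choices):

```python
import random

random.seed(0)
counts = {d: 0 for d in range(1, 10)}
for _ in range(200_000):
    n = random.randint(1, 9)              # the "n" of each sub-distribution
    X = random.randint(2, 6)              # magnitude of the range
    v = random.randint(1, n * 10 ** X)    # uniform draw from (1, n*10^X)
    counts[int(str(v)[0])] += 1           # tally the leading digit

print(counts)
```

As predicted, the lower leading digits come out markedly more common, with 1 well ahead of 9, even though every individual sub-distribution is uniform.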

Prime numbers, meanwhile, become decreasingly common as you get larger and larger, is that not correct?

Yes, that is correct. There are at least logarithmically many of them (and in fact far more: below k, the count is about k/ln k).

Bertrand's Conjecture (proven by Chebyshev) states that for all n > 1, there's a prime p with n < p < 2n.

If you look only at powers of two, it's readily seen that there are at least n primes between 1 and 2^n; setting k = 2^n, that's at least log2(k) primes between 1 and k.
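
Both claims are cheap to check numerically on a small range; a sketch (the bounds of 1000 and 2^10 are arbitrary):

```python
import math

def primes_up_to(limit):
    """Sieve of Eratosthenes: all primes below `limit`."""
    is_prime = [True] * limit
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p::p] = [False] * len(is_prime[p * p::p])
    return [i for i, ok in enumerate(is_prime) if ok]

primes = primes_up_to(2_001)

# Bertrand: for every n > 1 there is a prime p with n < p < 2n.
for n in range(2, 1000):
    assert any(n < p < 2 * n for p in primes), n

# Hence at least log2(k) primes below k: at least one new prime per doubling.
for k in (16, 256, 1024):
    count = sum(1 for p in primes if p < k)
    print(k, count, math.log2(k))
```

The printed counts sit far above the log2(k) floor, consistent with the point that the logarithmic bound is only a lower bound.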

The Prime Number Theorem gives the precise answer, and it doesn't have an easy proof (AFAIK): the number of primes below n is asymptotically n/ln(n), which is far more than log(n). So the logarithmic count above is only a lower bound, not the true order of growth.

I am admittedly not a mathematician, but I do have a good understanding of economics and finance, and I am not seeing how a pattern found in prime numbers could have any application to stock market analysis. Where is the interaction between prime numbers and the praxeology of buying and selling securities? Even if you're only focusing on automated buying and selling, those algorithms were still programmed by humans with their own subjective approaches and underlying premises.

Benford's law can be used to detect fraud (the article states this, I don't have any reason to doubt it). They studied primes and found a pattern that is associated with a related property that they are calling Generalized Benford's Law. Presumably, the generalized rule can be used to detect a wider range of unnatural activity than Benford's law itself.

I've always wondered how I could figure out when someone was trying to pass off a list of fraudulent primes. Glad to see that this problem is finally solved!

You're jesting, but I imagine that many fields of encryption could benefit from this, like two-key (public-key) encryption, where the security lies in the ability to trust that the product really is of two primes, and that factoring it would be extremely time consuming.

Sets with a backdoor inserted may indeed have a different signature, and to be able to quickly see that one set differs would be invaluable. It wouldn't prove anything, but if, say, keys received from a certain company's key generator stood out like a sore thumb in a Benford distribution check, you would have reason to suspect foul play, incompetence or both.

I am admittedly not a mathematician, but I do have a good understanding of economics and finance, and I am not seeing how a pattern found in prime numbers could have any application to stock market analysis. Where is the interaction between prime numbers and the praxeology of buying and selling securities?

By understanding the patterns in prime numbers you can learn to spot them and avoid the sub-prime mortgage backed securities. Duh.

I find it interesting that the article doesn't prove any theorems. At least searching for the word "theorem" in the pdf only gives references to other theorems. Searching for "proof" gives no hits.

That leaves me thinking: what does this article tell us that we couldn't find out ourselves by ripping through some prime numbers? I thought the real power of math was to say something 100% certain about some infinitude of stuff, so we don't have to go and check every case by hand.

Oh well, I guess every open question needs some results of the form "this holds for all n <= bignum"; say, like the Goldbach Conjecture (every even number n > 2 is the sum of two primes).

> That leaves me thinking: what does this article tell us that we couldn't find out ourselves by ripping through some prime numbers?

Nothing?

The important thing is that they ripped through some prime numbers and did notice, and they were the first to publish what they noticed.

The world moves forward in tiny steps like this. Maybe the next mathematician gets his 'aha' moment on the back of an insight like this and bang, modern crypto is fucked. He might even be able to prove it for you.

Their experimental result is a trivial consequence of the fact that prime number density around n is about 1/log(n). One could work out the exact theoretical distribution in one paragraph and that'd be all. I guess the authors are either ignorant or they prefer to market their result as "mysterious". Probably both.

OK, not the most exciting science story of all time. Perhaps Carl Sagan either implanted or discovered a potential capacity for fascination with the science of primes in his novel "Contact" where a large sequence of prime numbers is used as an attempt by extra terrestrials to communicate with humanity.

Before I begin: I am a math PhD candidate, but not in number theory. The following is probably better than a lay interpretation, but not an expert one either.

Basically, they have generalized BL (Benford's Law) to get a GBL. They then tested the primes in the range [1, 10^11] against GBL, and verified that it was satisfied. They DID NOT PROVE THIS HOLDS FOR ALL PRIMES!!! They then went on to speculate about applications of this in other areas (finance, etc.).

It has less to do with math and more to do with physics: as in, how to use an old-school phone. Phone numbers, until comparatively recently, would "prefer" lower numbers because they are EASIER TO DIAL. If a company had the phone number (909) 999-9009, you would HATE dialing that thing. It would take about half a minute just to dial the damn number.

Ssssshhhhhhik!
diggadiggadiggadiggadiggadiggadiggadiggadigga!

Total pain in the finger.

1 as a first number was reserved for "other stuff" like international calls, so the lowest possible area codes (first numbers) went to places like New York City (212 - very quick to dial) or LA (213) because millions of people would be dialing that number, so it made for an overall faster dialing experience for (on average) more people.

This is compared to the relatively few people who lived in more obscure parts of the country, like Saginaw MI (989) or Bryan TX (979).

So, you have millions of phones in 212, thousands in 979. The result: saved effort in dialing.

Also, to this end there was a preference for exchanges to have lower numbers as well, to save on dialing effort, and phone numbers with lower (but NON-ZERO) values were sought after. You'd see advertisements like "Call RotoRooter - 213 464 1111 !" or "Call us NOW for a free analysis! 201 738 1122 !" and so on.

So, lower numbers in phone numbers have been a product of primitive dialing technology. Now with touchtone - all that is out the window - but the historic trend is still there and quite powerful - people will pay good money for a 212 area code for the distinction of being in the "real" New York Area code...

While you're absolutely right about the reasoning behind NYC, LA, and Chicago getting 212, 213, and 312, you're a little off on the 989 and 979 area codes, which are much more recent.

In the original system design, all area codes had a middle digit of 0 or 1. The convention was that a middle digit of 1 was used for area codes that only covered part of a state, while a middle digit of 0 was used for area codes that covered entire states. Furthermore, an area code could not begin with a 1 or a 0, and an area code with a middle digit of 1 couldn't have 1 as the third digit. (This left the shortest-dial-time statewide code as 201, which went to New Jersey.)

As early as the late 1950s, the idea of single area codes for some states went out the window (with NJ splitting into 201 and 609 in 1958) because of increasing population and proliferation of phone service.

By the late 1980s, the rules were further changed to allow for area codes with middle digits other than 1 or 0. Area codes like 989 and 979 weren't introduced until the late 1980s at the very earliest, by which point very few people were still using rotary phones. At one point, I had heard that the middle digit value of 9 was reserved for the future to allow for four digit area codes, but I can't vouch for the accuracy of that recollection. There are plenty of other rules, some of which you can see summarized here... [wikipedia.org]
-JMP

So, you have millions of phones in 212, thousands in 979. The result: saved effort in dialing.

Nice idea, but you give the phone company too much credit. In the old days telephone switches still used physical relays (this is well before transistors were invented). This significantly limited the number of connections in progress each switch could handle. Since switches are expensive, you naturally wanted to pass on the call as fast as possible so you could free up the switch for the next caller. A number like '212' wasn't just easy to dial, it was fast — remember this is the era of pulse dialing as well, so a '9' took literally 9 times longer to dial than a '1'. Assigning fast numbers like '212' to New York saved money for the phone company because Ma Bell could buy fewer switches. Any benefit to the customer was purely accidental.

This is one of those moments that I love about /. Personally, I was trying to calculate the first 50M primes using the sieve of Eratosthenes and then constructing a program that categorizes them, but since you are doing all the work, I say go ahead and I'll wait for the results.

was busted by auditors who found the books were "cooked", by applying the law of first digits described in the /. blurb and TFA. The independent auditors found the first digits were randomly distributed instead of following Benford's law, with the number 1 the most plentiful and 9 the least -- therefore, the entries were fraudulent.

Benford's law knocked me out at the time; I thought of how many bogus figures I had entered in my expense accounts over the years....

At smaller scales than Enron, Benford and other related number distribution analysis schemes are indeed used to find fraud.

Know the time report you fill out for the company you work at? There's a chance that it passes through a filter. If you make up figures for how long you spend on certain tasks, someone higher up may see it with a footnote saying "High probability of data being fictitious". If this pattern repeats itself over months, don't be surprised if your chances of retaining your job diminish.

On the other hand, you can use the statistics for your own advantage too. Not the least in games and gambling, where guessing your competitor's status and actions can change the odds quite dramatically. Including games and gambling you don't think of as games and gambling, like placing bids in auctions.

I also like their explanation for "why": "Imagine that you deposit $1,000 in a bank at 10 percent compound interest per year. Next year you'll have $1,100, the year after that $1,210, then $1,331, and so on. The first digit of your bank account remains a '1' for a long time.

When your account grows to more than $2,000, the first digit will remain a '2' for a shorter period as your interest increases. And when your deposit finally grows to more than $9,000, the 10 percent growth will result in more than $10,000 in your account the following year, and a long return to '1' as the first digit."
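
That compounding story takes only a few lines to simulate (the 50-year horizon is an arbitrary choice):

```python
from collections import Counter

balance = 1000.0
first_digits = []
for year in range(50):
    first_digits.append(int(str(int(balance))[0]))  # leading digit this year
    balance *= 1.10                                 # 10% compound interest

digit_counts = Counter(first_digits)
print(digit_counts)
```

The balance spends roughly log10(2)/log10(1.1), about 7-8, consecutive years with a leading 1, but only about one year with a leading 9, which is exactly the Benford shape.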

If you had asked me about the distribution of first digits of prime #s yesterday I probably would have guessed logarithmic, regardless of base (except for binary, of course).

Think about it. We know that prime #s are distributed logarithmically. The set of N-digit #s has equal-sized subsets of numbers starting with 1, 2, 3, etc. Those subsets are equal in size, exclusive, and completely ordered with respect to each other. So it follows that the # of prime #s in consecutive subsets would be a logarithmic function. And if you add up the sizes of the prime subsets for each starting digit, you'll still get a logarithmic distribution.

If this is the first time you've run across Benford's law, you might have thought to yourself, "Wow, I can use that to predict large prime numbers! I'll just convert numbers to base X, where X is really big, and only check numbers that start with 1."

It's worth actually trying this, if you get a minute. You'll find that as X gets large, the number of primes that start with 2 gets closer to the number of primes that start with 1. As X approaches infinity, your distribution approaches a smooth logarithmic curve.

It's neat to see it yourself. This gives you an easy way to experiment with an infinite, easily generated set of numbers that follows Benford's law. It turns out that math actually works, oddly enough.
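
It is indeed worth trying, even at small scale. A Python sketch (primes below 10^6 rather than "really big" X; the cutoff and the bases 10, 100, and 1000 are arbitrary choices), counting how often the leading base-X "digit" is 1 versus 2:

```python
from collections import Counter

def primes_up_to(limit):
    """Sieve of Eratosthenes: all primes below `limit`."""
    is_prime = bytearray([1]) * limit
    is_prime[0] = is_prime[1] = 0
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p::p] = bytearray(len(range(p * p, limit, p)))
    return [i for i in range(limit) if is_prime[i]]

def leading_digit(n, base):
    # Strip trailing base-`base` digits until one remains.
    while n >= base:
        n //= base
    return n

primes = primes_up_to(1_000_000)
lead_counts = {}
for base in (10, 100, 1000):
    lead_counts[base] = Counter(leading_digit(p, base) for p in primes)
    c = lead_counts[base]
    print(base, c[1], c[2], round(c[2] / c[1], 3))
```

Even at this modest cutoff, the counts for leading digits 1 and 2 differ by only about five percent in each base, though with just 10^6 numbers you won't see the full limiting behavior.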

The prime number theorem was conjectured in 1796 by Adrien-Marie Legendre and proved in 1896 independently by Jacques Hadamard and Charles Jean de la Vallée-Poussin. It says that if pi(N) denotes the number of primes p <= N, then pi(N) / (N / ln N) converges towards 1; accordingly, the number of primes between A and B is about (B / ln B - A / ln A). This shows that there should be slightly more d-digit primes starting with 1 than with 2, 3, 4, etc. A reasonably good approximation is that the number of d-digit primes starting with 1 is not 1/9th of all d-digit primes, but more precisely about (11 1/9 + 5.7/d) percent. This is all very, very simple maths. I doubt this has never been observed before; it was just never considered worth mentioning.
However, the prime number theorem alone is not enough to prove this; it would be necessary to prove that the convergence happens at a certain speed. So anything that these so-called "mathematicians" claim that depends on observations of a large list of primes is pure nonsense.
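
The parent's arithmetic can be reproduced numerically. A sketch using a crude midpoint-rule logarithmic integral as the PNT prime-count estimate (the step count is an arbitrary accuracy/speed trade-off, and the (11 1/9 + 5.7/d) percent figure is the parent's rule of thumb, not an exact formula):

```python
import math

def li(x, steps=200_000):
    """Crude midpoint-rule approximation of the integral of 1/ln t from 2 to x."""
    h = (x - 2) / steps
    return sum(h / math.log(2 + (i + 0.5) * h) for i in range(steps))

fracs = {}
for d in (4, 6, 8):
    lo, hi = 10 ** (d - 1), 10 ** d
    # Estimated share of d-digit primes whose first digit is 1:
    # primes in [10^(d-1), 2*10^(d-1)) over primes in [10^(d-1), 10^d).
    fracs[d] = (li(2 * lo) - li(lo)) / (li(hi) - li(lo))
    rule_of_thumb = (100 / 9 + 5.7 / d) / 100
    print(d, round(100 * fracs[d], 2), round(100 * rule_of_thumb, 2))
```

The share stays above 1/9 and drifts down toward it as d grows, matching the rule of thumb to within a fraction of a percentage point.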

I was thinking this myself at first, but apparently something goes wrong with it, because (per TFA) the distribution actually moves away from Benford's Law weighting and toward uniformity as the sample size grows larger. The specific rate of this movement is somehow described by a generalization of Benford's Law; while BL at one point uses 1/x, GBL uses 1/(x^a), with BL as the special case where a = 1.

So I read the comments and see that I need to do this in ranges of 1 to 100, 1 to 1000, etc. Which is fine; I've added another R method and would post the code here if it didn't yell at me for junk characters. So here are your Benford lists:
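
Since the junk-character filter ate the parent's R code, here is a minimal Python sketch of the same idea (sieve the primes, then tally first digits for each range; the range endpoints follow the parent's description):

```python
from collections import Counter

def primes_up_to(limit):
    """Sieve of Eratosthenes: all primes below `limit`."""
    is_prime = bytearray([1]) * limit
    is_prime[0] = is_prime[1] = 0
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p::p] = bytearray(len(range(p * p, limit, p)))
    return [i for i in range(limit) if is_prime[i]]

primes = primes_up_to(1_000_000)

tables = {}
for top in (100, 1_000, 10_000, 100_000, 1_000_000):
    c = Counter(int(str(p)[0]) for p in primes if p < top)
    tables[top] = [c[d] for d in range(1, 10)]   # counts for digits 1..9
    print(f"1..{top}:", tables[top])
```

Each row is one "Benford list": the count of primes below the cutoff whose first digit is 1 through 9.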

Hello troll, your inability to understand mathematics does not mean it has no real-world application. Her little project may well have been able to provide the basis for some economic or social model, or may prove vital in unlocking the bit of physics that enables the next revolution in technology. Besides all these very important uses that skip the average Joe, mathematics is often elegant and beautiful, and may be considered a form of art by some people.

Perhaps she was wondering the same about you as you walked away looking dumbfounded.

Just because something is complicated and difficult for most people to grasp doesn't mean it hasn't got some real-world application at some point. That's why we need people like her to make sense of that sort of stuff, to the benefit of the rest of us.

For the same reason some people take Philosophy, Ancient Literature, Paleontology, etc. Because they think the subject is cool, and aren't necessarily at school to learn a trade. (Indeed, Engineering students that are paying attention also discover they aren't directly being taught a trade either. Or at least they aren't in any Engineering college worthy of the name.)

They want to become an actuary. This is a fairly well-paid job that is also rather difficult to do, and even harder to do well.

They want to become math teachers; a valuable and much-needed profession. Math is a useful tool in teaching students how to think. We certainly don't torture legions of high school students with the details of conic sections because anybody is under the impression this is a directly practical skill for most citizens to have. Nor are hundreds of thousands of college students subjected to the horrors of calculus because of some kind of employment program for math post-docs.

They are double-majors in a field in which math is extremely important (physics, astronomy, computer science, every type of engineering, linguistics, medicine, biology, etc. Pretty much every field outside the humanities. Oh, and some of the humanities make extensive use of math too.)

...It's the same argument 'when am I ever going to use algebra/geometry [as taught in high school]'.

As electrical engineering undergrads, we were expected to already know a fairly large amount of algebraic and geometric/trigonometric relationships from high school, and we never went over those principles in class. Now, if you're not going into a scientific/engineering/mathematics degree you're probably never going to need those principles, but it's a good thing to learn in case you don't yet know whether you want to be a technical student in college (if you even end up going).

As an electrical-engineering undergraduate... I would think that most people that go through a pure mathematics degree genuinely enjoy these processes... I can guarantee you that this mental training does give me an edge

Could this be the beginnings of a non-quantum solution to, say, the problem of factoring large numbers? Not a solution itself, but the beginnings of a method for breaking RSA without resorting to the use of qubits et al.?

Probably not. It's an interesting result, but it's not specific to prime numbers. Rather, it applies to all sets whose density is related to 1/log n.