It’s not every day that scientists observe a new species emerging in real time. Charles Darwin believed that speciation probably took place over hundreds if not thousands of generations, advancing far too gradually to be detected directly. The biologists who followed him have generally defaulted to a similar understanding and have relied on indirect clues, gleaned from genomes and fossils, to infer complex organisms’ evolutionary histories.

Some of those clues suggest that interbreeding plays a larger role in the formation of new species than previously thought. But the issue remains contentious: Hybridization has been definitively shown to cause widespread speciation only in plants. When it comes to animals, it has remained a hypothesis (albeit one that’s gaining increasing support) about events that typically occurred in the distant, unseen past.

Until now. In a paper published last month in Science, researchers reported that a new animal species had evolved by hybridization — and that it had occurred before their eyes in the span of merely two generations. The breakneck pace of that speciation event turned heads both in the scientific community and in the media. The mechanism by which it occurred is just as noteworthy, however, because of what it suggests about the undervalued role of hybrids in evolution.

Eyewitnesses to Speciation

In 1981, Peter and Rosemary Grant, the famous husband-and-wife team of evolutionary biologists at Princeton University, had already been studying Darwin’s finches on the small Galápagos island Daphne Major for nearly a decade. So when they spotted a male bird that looked and sounded different from the three species residing on the island, they immediately knew he didn’t belong. Genetic analysis showed he was a large cactus finch (Geospiza conirostris) from another island, either Española or Gardner, more than 60 miles away — too great a distance for the bird to fly home.

Tracking the marooned male bird’s activity, the Grants observed him as he mated with two female medium ground finches (G. fortis) on Daphne and produced hybrid offspring. Such interbreeding by isolated animals in the wild is not uncommon, though biologists have usually dismissed it as irrelevant to evolution because the hybrids tend to be unfit. Often they cannot reproduce, or they fail to compete effectively against established species and quickly go extinct. Even when the hybrids are fertile and fit, they frequently get reabsorbed into the original species by mating with their parent populations.

But something different happened with the hybrids on Daphne: When they matured, they became a population distinct from Daphne’s other bird species by inbreeding extensively and exclusively — siblings mating with siblings, and parents mating with their offspring.

In short, an incipient hybrid species, which the researchers dubbed the Big Bird lineage, had emerged within two generations. Today, six generations have passed, and the island is home to around 30 Big Bird finches. “If you were a biologist none the wiser to what had happened,” said Leif Andersson, a geneticist at Uppsala University in Sweden and one of the study’s co-authors, “and you started studying these birds, you’d think there were four different species on the island.”

Where Hybrids Thrive

On Daphne Major, the conditions may have been just right for hybrid speciation. “It shows what is possible, given the right circumstances,” Peter Grant said, and it sends “a valuable message about the importance of rare and unpredictable events in evolution. These have probably been underestimated.”

The Big Bird lineage became reproductively isolated so quickly because those birds could not successfully attract mates among the island’s resident species, which preferred their own kind. Big Bird finches couldn’t pass muster: They had relatively large beaks for their body size, and they boasted a unique song. These differences prevented gene flow between the hybrids and the native medium ground finches from which they had descended, leading to a distinct hybrid population. (In their Science paper, the Grants and their colleagues noted that the species status of Big Bird finches is still unofficial because no one has yet tested whether the birds will breed with their ancestral finches on Española and Gardner. But they cited reasons to suspect that the Big Bird lineage is reproductively isolated from them as well.)

The physical differences in the hybrid lineage also made them competitive. The size and shape of the Big Birds’ beaks placed them squarely in their own ecological niche, allowing them to eat certain types of seeds their competitors could not. “The data is consistent with selection having taken place,” Andersson said.

The fact that this niche was available for Big Bird to occupy is likely a result of the particularly young, isolated and often extreme environment of the Galápagos. “Conditions on the islands really help drive the speciation process,” said Scott Edwards, an ornithologist at Harvard University.

The same might be said for small and isolated environments elsewhere, such as mountaintops or ponds. By contrast, speciation isn’t likely to occur this way in less isolated regions, said Trevor Price, an ecologist at the University of Chicago. In those areas, where competition for resources is already fierce, a new hybrid species like Big Bird would find no niche for itself.

But hybrid species may have had more widespread opportunities to establish themselves in the past. Perhaps, Price suggested, this rapid production of species could have occurred in the aftermath of the meteorite impact that caused a mass extinction on Earth 66 million years ago. At that time, there were resources and potential niches, and not enough species to fill them.

Fractal Speciation

Some experts think that even today hybrid speciation may be far from rare. Under the most commonly accepted speciation model, called allopatric speciation, populations get geographically separated — by a change in a river’s course, say, or the formation of a mountain, or diverging migrations — and then adapt to distinct competitors and environments over long periods. If the groups ever meet again, they may no longer be similar enough to interbreed.

But very fast hybrid speciation events like the one the Grants saw on Daphne may often occur in bursts — only to end with the new species dying out before we have time to observe it. “Speciation is common. It’s happening all around us,” said James Mallet, an evolutionary biologist at Harvard University. “It’s just that usually we don’t recognize the divergent lineages that are appearing as separate species.”

He continued, “I believe that speciation is more of a continuum than people have been thinking.” At one extreme, species can be cleanly divided from one another, with no interbreeding and exchange of genes. But to greater or lesser degrees, hybridization could also allow genes to flow into a species from others, and the resulting hybrids could sometimes develop an identity of their own, even if only temporarily.

Mallet argues that hybrid species could very well be appearing all the time, only to collapse and disappear just as quickly, with the hybrids either going extinct or being absorbed back into a parent population. “To me, that shows the abundance of opportunities for speciation,” he said. Just because many hybrids go extinct — a fate that is still very likely to befall the Big Bird lineage, according to Mallet — doesn’t mean that hybridization is not a real source of new species in nature.

Instead, he sees speciation as an almost fractal process that can be observed in ecosystems over different timescales. “If you look at the macro level over millions of years,” Mallet said, “you’ll see a few species evolving and going extinct.” But at the micro level, on the order of dozens of years or less, species may be forming and dissolving all the time. Most biologists simply don’t have the long-term data to show it, he said.

What made it possible to identify such an event in Darwin’s finches was the Grants’ decades of careful fieldwork, followed by detailed genomics studies. The new paper illustrates “the value of continuous, long-term studies in nature,” Peter Grant said. Without that, “we would not have detected or been able to interpret the immigration of an individual of one species and interbreeding with a member of the resident species.”

Although biologists do not yet know how much animal hybrid speciation occurs outside the Galápagos, they are becoming increasingly aware of hybrids as a powerful agent of evolution. “I think this paper really increases that signal,” Edwards said. “Researchers like me are going to be looking for it much more regularly.”

Neutrinos Suggest Solution to Mystery of Universe’s Existence

Tue, 12 Dec 2017

From above, you might mistake the hole in the ground for a gigantic elevator shaft. Instead, it leads to an experiment that might reveal why matter didn’t disappear in a puff of radiation shortly after the Big Bang.

I’m at the Japan Proton Accelerator Research Complex, or J-PARC — a remote and well-guarded government facility in Tokai, about an hour’s train ride north of Tokyo. The experiment here, called T2K (for Tokai-to-Kamioka), produces a beam of the subatomic particles called neutrinos. The beam travels through 295 kilometers of rock to the Super-Kamiokande (Super-K) detector, a gigantic pit buried 1 kilometer underground and filled with 50,000 tons (about 13 million gallons) of ultrapure water. During the journey, some of the neutrinos will morph from one “flavor” into another.

In this ongoing experiment, the first results of which were reported last year, scientists at T2K are studying the way these neutrinos flip in an effort to explain the predominance of matter over antimatter in the universe. During my visit, physicists explained to me that an additional year’s worth of data was in, and that the results are encouraging.

According to the Standard Model of particle physics, every particle has a mirror-image particle that carries the opposite electrical charge — an antimatter particle. When matter and antimatter particles collide, they annihilate in a flash of radiation. Yet scientists believe that the Big Bang should have produced equal amounts of matter and antimatter, which would imply that everything should have vanished fairly quickly. But it didn’t. A very small fraction of the original matter survived and went on to form the known universe.

Researchers don’t know why. “There must be some particle reactions that happen differently for matter and antimatter,” said Morgan Wascko, a physicist at Imperial College London. Antimatter might decay in a way that differs from how matter decays, for example. If so, it would violate an idea called charge-parity (CP) symmetry, which states that the laws of physics shouldn’t change if matter particles swap places with their antiparticles (charge) while viewed in a mirror (parity). The symmetry holds for most particles, though not all. (The subatomic particles known as quarks violate CP symmetry, but the deviations are so small that they can’t explain why matter so dramatically outnumbers antimatter in the universe.)

Last year, the T2K collaboration announced the first evidence that neutrinos might break CP symmetry, thus potentially explaining why the universe is filled with matter. “If there is CP violation in the neutrino sector, then this could easily account for the matter-antimatter difference,” said Adrian Bevan, a particle physicist at Queen Mary University of London.

Researchers check for CP violations by studying differences between the behavior of matter and antimatter. In the case of neutrinos, the T2K scientists explore how neutrinos and antineutrinos oscillate, or change, as the particles make their way to the Super-K detector. In 2016, 32 muon neutrinos changed to electron neutrinos on their way to Super-K. When the researchers sent muon antineutrinos, only four became electron antineutrinos.

That result got the community excited — although most physicists were quick to point out that with such a small sample size, there was still a 10 percent chance that the difference was merely a random fluctuation. (By comparison, the probability that the signal behind the 2012 Higgs boson discovery was due to chance was less than one in a million.)

This year, researchers collected nearly twice the amount of neutrino data as last year. Super-K captured 89 electron neutrinos, significantly more than the 67 it should have found if there was no CP violation. And the experiment spotted only seven electron antineutrinos, two fewer than expected.

Researchers aren’t claiming a discovery just yet. Because there are still so few data points, “there’s still a 1-in-20 chance it’s just a statistical fluke and there isn’t even any violation of CP symmetry,” said Phillip Litchfield, a physicist at Imperial College London. For the results to become truly significant, he added, the experiment needs to get down to about a 3-in-1000 chance, which researchers hope to reach by the mid-2020s.
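Physicists usually quote significance as a number of “sigma” rather than as a raw probability. As a rough illustration only (not the collaboration’s actual statistical analysis, and conventions vary — particle physics often uses one-sided tail probabilities), a two-sided p-value can be converted to an equivalent number of standard deviations with Python’s standard library; the function name here is just for illustration:

```python
from statistics import NormalDist

def two_sided_sigma(p):
    """Convert a two-sided p-value into the equivalent number of
    standard deviations ("sigma") of a normal distribution."""
    return NormalDist().inv_cdf(1 - p / 2)

# The chances mentioned in the article:
print(round(two_sided_sigma(0.05), 2))    # 1-in-20 chance  -> 1.96
print(round(two_sided_sigma(0.003), 2))   # 3-in-1000 chance -> 2.97
```

Under this rough convention, the 1-in-20 chance of a fluke corresponds to about 2 sigma, and the 3-in-1000 target to about 3 sigma — well short of the roughly 5-sigma bar conventionally required to claim a discovery.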

But the improvement on last year’s data, while modest, is “in a very interesting direction,” said Tom Browder, a physicist at the University of Hawaii. The hints of new physics haven’t yet gone away, as we might expect them to do if the initial results were due to chance. Results are also trickling in from another experiment, the 810-kilometer-long NOvA at the Fermi National Accelerator Laboratory outside Chicago. Last year it released its first set of neutrino data, with antineutrino results expected next summer. And although these first CP-violation results will also not be statistically significant, if the NOvA and T2K experiments agree, “the consistency of all these early hints” will be intriguing, said Mark Messier, a physicist at Indiana University.

A planned upgrade of the Super-K detector might give the researchers a boost. Next summer, the detector will be drained for the first time in over a decade, then filled again with ultrapure water. This water will be mixed with gadolinium sulfate, a type of salt that should make the instrument much more sensitive to electron antineutrinos. “The gadolinium doping will make the electron antineutrino interaction easily detectable,” said Browder. That is, the salt will help the researchers to separate antineutrino interactions from neutrino interactions, improving their ability to search for CP violations.

“Right now, we are probably willing to bet that CP is violated in the neutrino sector, but we won’t be shocked if it is not,” said André de Gouvêa, a physicist at Northwestern University. Wascko is a bit more optimistic. “The 2017 T2K result has not yet clarified our understanding of CP violation, but it shows great promise for our ability to measure it precisely in the future,” he said. “And perhaps the future is not as far away as we might have thought last year.”

The (Math) Problem With Pentagons

Mon, 11 Dec 2017

Children’s blocks lie scattered on the floor. You start playing with them — squares, rectangles, triangles and hexagons — moving them around, flipping them over, seeing how they fit together. You feel a primal satisfaction from arranging these shapes into a perfect pattern, an experience you’ve probably enjoyed many times. But of all the blocks designed to lie flat on a table or floor, have you ever seen any shaped like pentagons?

People have been studying how to fit shapes together to make toys, floors, walls and art — and to understand the mathematics behind such patterns — for thousands of years. But it was only this year that we finally settled the question of how five-sided polygons “tile the plane.” Why did pentagons pose such a big problem for so long?

To understand the problem with pentagons, let’s start with one of the simplest and most elegant of geometric structures: the regular tilings of the plane. These are arrangements of regular polygons that cover flat space entirely and perfectly, with no overlap and no gaps. Here are the familiar triangular, square and hexagonal tilings. We find them in floors, walls and honeycombs, and we use them to pack, organize and build things more efficiently.

These are the easiest tilings of the plane. They are “monohedral,” in that they consist of only one type of polygonal tile; they are “edge-to-edge,” meaning that corners of the polygons always match up with other corners; and they are “regular,” because the one tile being used repeatedly is a regular polygon whose side lengths are all the same, as are its interior angles. Our examples above use equilateral triangles (regular triangles), squares (regular quadrilaterals) and regular hexagons.

Remarkably, these three examples are the only regular, edge-to-edge, monohedral tilings of the plane: No other regular polygon will work. Mathematicians say that no other regular polygon “admits” a monohedral, edge-to-edge tiling of the plane. And this far-reaching result is actually quite easy to establish using only two simple geometric facts.

First, there’s the fact that in any polygon with n sides, where n must be at least 3, the sum of the interior angles, measured in degrees, is

$latex S_n = 180(n – 2)$

This is true for any polygon with n sides, regular or not, and it follows from the fact that an n-sided polygon can be divided into (n − 2) triangles, and the sum of the measures of the interior angles of each of those (n − 2) triangles is 180 degrees.

Second, we observe that the angle measure of a complete trip around any point is 360 degrees. This is something we can see when perpendicular lines intersect, since 90 + 90 + 90 + 90 = 360.

What do these two facts have to do with the tiling of regular polygons? By definition, the interior angles of a regular polygon are all equal, and since we know the number of angles (n) and their sum (180(n − 2)), we can just divide to compute the measure of each individual angle.

$latex \theta_n = \frac {180(n-2)}{n}$

We can make a chart for the measure of an interior angle in regular n-gons. Here they are up to n = 8, the regular octagon.

n    Name                   Sum of interior angles    One interior angle
3    Equilateral triangle   180                       60
4    Square                 360                       90
5    Pentagon               540                       108
6    Hexagon                720                       120
7    Heptagon               900                       128 4/7
8    Octagon                1080                      135

This chart raises all sorts of interesting mathematical questions, but for now we just want to know what happens when we try to put a bunch of the same n-gons together at a point.

For the equilateral-triangle tiling, we see six triangles coming together at each vertex. This works out perfectly: The measure of each internal angle of an equilateral triangle is 60 degrees, and 6 × 60 = 360, which is exactly what we need around a single point. Similarly for squares: Four squares around a single point at 90 degrees each gives us 4 × 90 = 360.

But starting with pentagons, we run into problems. Three pentagons at a vertex gives us 324 degrees, which leaves a gap of 36 degrees that is too small to fill with another pentagon. And four pentagons at a point produces unwanted overlap.

No matter how we arrange them, we’ll never get pentagons to snugly match up around a vertex with no gap and no overlap. This means the regular pentagon admits no monohedral, edge-to-edge tiling of the plane.

A similar argument will show that after the hexagon — whose 120-degree angles neatly fill 360 degrees — no other regular polygon will work: The angles at each vertex simply won’t add up to 360 as required. And with that, the regular, monohedral, edge-to-edge tilings of the plane are completely understood.
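The divisibility argument above is easy to check by brute force. Here is a minimal sketch (the function name is just for illustration) using exact rational arithmetic to confirm that, among regular polygons, only the triangle, square and hexagon have an interior angle that divides 360 evenly:

```python
from fractions import Fraction

def interior_angle(n):
    """Interior angle of a regular n-gon, in degrees, kept exact."""
    return Fraction(180 * (n - 2), n)

# A regular n-gon admits a monohedral, edge-to-edge tiling only if a
# whole number of copies fits around a point, i.e. only if its
# interior angle divides 360 with no remainder.
tilers = [n for n in range(3, 100) if Fraction(360) % interior_angle(n) == 0]
print(tilers)  # [3, 4, 6]
```

Past n = 6 the interior angle sits strictly between 120 and 180 degrees, so 360 divided by it falls strictly between 2 and 3 and can never be a whole number — which is why the search finds nothing beyond the hexagon.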

Of course, that’s never enough for mathematicians. Once a specific problem is solved, we start to relax the conditions. For example, what if we don’t restrict ourselves to regular polygonal tiles? We’ll stick with “convex” polygons, those whose interior angles are each less than 180 degrees, and we’ll allow ourselves to move them around, rotate them and flip them over. But we won’t assume the side lengths and interior angles are all the same. Under what circumstances could such polygons tile the plane?

For triangles and quadrilaterals, the answer is, remarkably, always! We can rotate any triangle 180 degrees about the midpoint of one of its sides to make a parallelogram, which tiles easily.

A similar strategy works for any quadrilateral: Simply rotate the quadrilateral 180 degrees around the midpoint of each of its four sides. Repeating this process builds a legitimate tiling of the plane.

Thus, all triangles and quadrilaterals — even irregular ones — admit an edge-to-edge monohedral tiling of the plane.

But with irregular pentagons, things aren’t so simple. Our experience with irregular triangles and quadrilaterals might seem to give cause for hope, but it’s easy to construct an irregular, convex pentagon that does not admit an edge-to-edge monohedral tiling of the plane.

For example, consider the pentagon below, whose interior angles measure 100, 100, 100, 100 and 140 degrees. (It may not be obvious that such a pentagon can exist, but as long as we don’t put any restrictions on the side lengths, we can construct a pentagon from any five angles whose measures sum to 540 degrees.)

The pentagon above admits no monohedral, edge-to-edge tiling of the plane. To prove this, we need only consider how multiple copies of this pentagon could possibly be arranged at a vertex. We know that at each vertex in our tiling the measures of the angles must sum to 360 degrees. But it’s impossible to put 100-degree angles and 140-degree angles together to make 360 degrees: You can’t add 100s and 140s together to get exactly 360.

Angle combination          Deficit
140 + 140 = 280            80
140 + 100 + 100 = 340      20
100 + 100 + 100 = 300      60

No matter how we try to put these pentagonal tiles together, we’ll always end up with a gap smaller than an available angle. Constructing an irregular pentagon in this way shows us why not all irregular pentagons can tile the plane: There are certain restrictions on the angles that not all pentagons satisfy.

But even having a set of five angles that can form combinations that add up to 360 degrees is not enough to guarantee that a given pentagon can tile the plane. Consider the pentagon below.

This pentagon has been constructed to have angles of 90, 90, 90, 100 and 170 degrees. Notice that every angle can be combined with others in some way to make 360 degrees: 170 + 100 + 90 = 360 and 90 + 90 + 90 + 90 = 360.
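A small brute-force search confirms both claims about vertex angles. This is a sketch: vertex_fits is a hypothetical helper name, and it assumes at most four tiles meet at a vertex, which is safe here because every angle involved is at least 90 degrees.

```python
from itertools import combinations_with_replacement

def vertex_fits(angles, max_copies=4):
    """Return every multiset of the given angle values that sums to
    exactly 360 degrees -- the only ways tiles can meet at a vertex."""
    fits = []
    for k in range(2, max_copies + 1):
        for combo in combinations_with_replacement(sorted(set(angles)), k):
            if sum(combo) == 360:
                fits.append(combo)
    return fits

print(vertex_fits([100, 100, 100, 100, 140]))  # [] -- no combination works
print(vertex_fits([90, 90, 90, 100, 170]))     # [(90, 100, 170), (90, 90, 90, 90)]
```

The first pentagon offers no way to reach 360, matching the deficit table above; the second offers exactly the two combinations noted in the text.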

The sides have also been constructed in a particular way: the lengths of AB, BC, CD, DE and EA are 1, 2, 3, x and y, respectively. We can calculate x and y, but it’s enough to know that they’re messy irrational numbers and they’re not equal to 1, 2 or 3, or to each other. This means that when we attempt to create an edge-to-edge tiling of the plane, every side of this pentagon has only one possible match from another tile.

Knowing this, we can quickly determine that this pentagon admits no edge-to-edge tiling of the plane. Consider the side of length 1. Here are the only two possible ways of matching up two such pentagons on that side.

The first creates a gap of 20 degrees, which can never be filled. The second creates a 100-degree gap. We do have a 100-degree angle to work with, but because of the edge restriction on the y side, we have only two options.

Neither of these arrangements generates valid edge-to-edge tilings. Thus, this particular pentagon cannot be used in an edge-to-edge tiling of the plane.

We’re starting to see that complicated relationships among the angles and sides make monohedral, edge-to-edge tilings with pentagons particularly complex. We need five angles, each of which can combine with copies of itself and the others to sum to 360. But we also need five sides that will fit together with those angles. Further complicating matters, a pentagon’s sides and angles aren’t independent: Setting restrictions on the angles creates restrictions for the side lengths, and vice versa. With triangles and quadrilaterals everything always fits, but when it comes to pentagons, it’s a balancing act to get everything to work out just right.

Things get trickier as we relax more conditions. When we remove the edge-to-edge restriction, we open up a whole new world of tilings. For example, a simple 2-by-1 rectangle only admits one edge-to-edge tiling of the plane, but it admits infinitely many tilings of the plane that aren’t edge-to-edge!

With pentagons, this adds another dimension of complexity to the already complex problem of finding the right combination of sides and angles. That’s partly why it took 100 years, multiple contributors and, in the end, an exhaustive computer search to settle the question. The 15 types of convex pentagons that admit tilings (not all edge-to-edge) of the plane were discovered by Karl Reinhardt in 1918, Richard Kershner in 1968, Richard James in 1975, Marjorie Rice in 1977, Rolf Stein in 1985, and Casey Mann, Jennifer McLoud-Mann and David Von Derau in 2015. And it took another mathematician in 2017, Michaël Rao, to computationally verify that no other such pentagons could work. Together with other existing knowledge, like the fact that no convex polygon with more than six sides can tile the plane, this finally settled an important question in the mathematical study of tilings.

When it comes to tiling the plane, pentagons occupy an area between the inevitable and the impossible. Having five angles means the average angle will be small enough to give the pentagon a chance at a perfect fit, but it also means that enough mismatches among the sides could exist to prevent it. The simple pentagon shows us that, even after thousands of years, questions about tilings still excite, inspire and astound us. And with many open questions remaining in the field of mathematical tilings — like the search for a hypothetical concave “einstein” shape that can only tile the plane nonperiodically — we’ll probably be putting the pieces together for a long time to come.

Solution: ‘Triumph or Cooperation in Game Theory and Evolution’

Fri, 08 Dec 2017

Our November Insights puzzle set out three scenarios exploring how competition and cooperation are modeled in game theory and how they might actually interact in modifying the equilibrium between two genes. Let’s work through them to gain a deeper appreciation for the intricacies in applying game theory to real-world situations.

Problem 1

Morra is a competitive hand-and-finger game played between two opponents. It begins like Rock-Paper-Scissors, with both players concealing their hands. At a prearranged signal, both players simultaneously show their hands, which reveal one to five outstretched fingers. In some advanced variants of Morra, you have to guess how many fingers your opponent will show, but for our puzzle, we will restrict our attention to a simpler version called Odds and Evens.

In Odds and Evens one of the players is designated Odd, and the other Even. In our variant both players may choose to show either a single outstretched index finger, or the entire palm with a folded thumb, thus showing four fingers. The sum of the number of fingers shown by both players decides who wins and the number of points the winner gets. Thus, if the first player shows one finger and the other shows four, the sum is five, so Odd wins and gets 5 points, and so on.

Imagine two new players trying their hand (so to speak) at this game. Even reasons as follows: “The game obviously gives both players even chances. In four rounds, on average, I will win either 2 or 8 points, for a total of 10, while Odd will win 5 points two times, also for a total of 10.” So, leaving all to chance, he went ahead and showed one finger half the time and four fingers half the time, at random. Odd, on the other, ahem, hand, thought: “I think there’s something odd about this game. I’m going to mix it up and randomly play one finger three-fifths of the time, and four fingers two-fifths of the time.”

Who wins in the above game? Why does this happen, even though the game looks symmetric? Does the winner have a better strategy?

Solution 1

With the strategies as stated, Odd wins in the long run. To see why, let’s tabulate the outcomes over a representative series of 20 games in which each player’s choices occur at exactly the stated frequencies.

No. of games   Odd plays   Even plays   Total   Odd points   Even points
6              1           1            2       0            12
6              1           4            5       30           0
4              4           1            5       20           0
4              4           4            8       0            32
Total                                           50           44

So in the long run, Odd is up 6 points in 20 games, giving an average win of 0.3 points per game. The game is symmetric only if both players play each alternative half of the time, but Odd deviates from this, reducing the possibility of Even winning big and increasing the possibility of Even winning small, thus winning more points even though each player still wins half of the games.

But as Mark Pearson pointed out, Even can observe Odd’s strategy and change his own to get better results: By playing four fingers all the time, Even wins by 0.2 points per game. In turn, Odd can adapt her strategy. Will this cat-and-mouse game ever end? It can, if Odd discovers the best strategy, one that yields the vaunted Nash equilibrium.

This equilibrium strategy, as nightrider pointed out, requires Odd to play, randomly, one finger 13 out of 20 times and four fingers seven out of 20 times. Let’s see what happens when Even tries his previous two strategies against this one.

If Even uses the 50-50 one finger/four fingers strategy, in 40 games, Odd will win 20 games with 5 points each to get 100 points, while Even will win 13 games with 2 points and seven games with 8 points, getting 82 points. Odd accumulates an average of 0.45 extra points per game.

If, on the other hand, Even uses the “four finger always” strategy, Odd wins 26 games at 5 points each to get 130 points, while Even will win 14 games at 8 points each to get 112 points. Odd again defeats Even by an average of 0.45 points per game.

In fact, as you can verify, no matter what strategy Even employs, Odd always does better by an average of 0.45 points per game. That’s the beauty of the Nash equilibrium. Odd has a stranglehold that cannot be broken (and reciprocally, Even has his own strategy that guarantees that he cannot lose by more than this).

How do we find this wonderful Nash equilibrium strategy? As nightrider pointed out, we have to find the point where the partial derivatives of the payoff with respect to both players’ probabilities of displaying a given number of fingers become zero. For a simplified way to get to this answer, let p = the probability of Odd playing 1, and q = the probability of Even playing 1. The expected winnings for Odd per round are:

–2pq + 5(1 – p)q + 5p(1 – q) – 8(1 – p)(1 – q)

This simplifies to 13p + 13q – 20pq – 8.

Now, Odd is looking for a strategy that she can use no matter what Even does. Let’s assume that there is such a strategy that Odd can always use, so p is a constant.

Then the above expression becomes q(13 – 20p) – (8 – 13p). Notice that if we make the first part of the expression equal to zero, then Odd’s expected winnings will become constant, which means that Even will not be able to lower Odd’s winnings by changing q. This happens when 13 – 20p is 0 or p = 13/20, which is the Nash equilibrium, as we verified above. The second part of the expression, 13p – 8, simplifies to 169/20 – 8 = 0.45, which gives Odd’s expected winnings for any value of q. (Mathematically, the above procedure is nothing but a simple equivalent of setting the derivative of our linear expression to zero.)

Problem 2

Amy and Bob are a pair of young twins who, like siblings everywhere, fight a lot and love cake. Their mother frequently bakes a cake that she distributes to them in the following way. She talks independently to each twin and asks about the other twin’s behavior. If neither of them has any complaints, each of them gets half the cake. If only one of them reports a valid infraction by the other, that person gets three-quarters of the cake, the other gets none, and Mom gets the remaining quarter. If both of them report valid infractions, they each get only one-quarter of the cake and Mom gets the remaining half.

A) What is the best strategy for Amy and Bob if they do not trust each other?

B) What is the best strategy for them if, on the other hand, they do trust each other?

C) If there are 100 such events, and you know the total amount of cake that was consumed by the twins, when can you say that there was more cooperation than betrayal between them and vice versa?

D) As an aside, the mother’s behavior in this example is interesting. How would you quantify the value she places on various factors like fostering trust, reward and punishment, and her own fondness for cake?

Solution 2

If the twins distrust each other, each knows that the other will rat them out on the slightest pretext. Therefore, each one should complain about the other. Both will get only one-quarter of the cake, but that will avoid the worst-case scenario of not complaining and getting nothing. It becomes a competitive game, and this solution is the Nash equilibrium.

If the twins trust each other, their best policy is to overlook the other’s infractions, if any, and not complain. That way they both get half the cake. This is a Pareto-optimal solution, as mentioned in the original puzzle, and it is also equitable. If we treat the amount of cake that the twins eat as a “common good” from their perspective, then this solution also maximizes the common good.
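Both claims can be checked by tabulating the payoffs and computing best responses. A minimal sketch follows; the move labels S (stay silent) and C (complain), and the assumption that each twin always has a valid infraction to report, are mine:

```python
# Payoffs are fractions of the cake for (Amy's move, Bob's move).
PAYOFFS = {
    ("S", "S"): (0.5, 0.5),    # neither complains: half each
    ("S", "C"): (0.0, 0.75),   # only Bob complains
    ("C", "S"): (0.75, 0.0),   # only Amy complains
    ("C", "C"): (0.25, 0.25),  # both complain: a quarter each
}

def best_response(opponent_move, player=0):
    """Best move for Amy (player=0) or Bob (player=1) given the other's move."""
    if player == 0:
        return max("SC", key=lambda m: PAYOFFS[(m, opponent_move)][0])
    return max("SC", key=lambda m: PAYOFFS[(opponent_move, m)][1])

# Complaining is a dominant strategy: it is the best response to either move.
assert best_response("S") == "C" and best_response("C") == "C"
# So (C, C) is the Nash equilibrium -- yet (S, S) gives both twins more cake.
print("Nash: (C, C) ->", PAYOFFS[("C", "C")])
print("Pareto-better: (S, S) ->", PAYOFFS[("S", "S")])
```

The assertions show why distrustful twins end up at (C, C) even though silent cooperation would double both their shares.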

As Mark Pearson wrote: “If both trust/collaborate and say ‘no complaints,’ they get a whole cake between them. If both twins betray each other, they get half a cake between them. If there were 100 possible cakes available (100 repeated events), then if the number of cakes the twins get to eat is closer to 100 than it is to 50 (half of 100 cakes), I’d say that there was more cooperation than betrayal. In other words, more than 75 cakes is more cooperation than betrayal.”

Note that this question was asked from a historical perspective. In a competitive, adversarial game that recurs only a limited number of times, trust cannot be fostered, as nightrider remarked. In real life, this is the reason to avoid fly-by-night operations: Without open-ended repeated interactions, cheating goes unpunished.

The mother prizes trust and cooperation infinitely more than her liking for cake, as she is willing to forgo cake entirely to achieve them. If trust and cooperation are breached, then she does want to hear about unilateral infractions, if any. Judging by her cake share alone, she values the reporting of bilateral infractions (half the cake) twice as much as unilateral ones (a quarter), on the assumption that twice the helping of cake soothes her twice as much. A lot of wrinkles can be added to this, but that would require more knowledge about what exactly is considered a reportable infraction.

As Mark Pearson commented, the problem’s settings do not explicitly reward honesty. For better parenting, Mark came up with an interesting alternate reward structure that separately rewards honesty and good behavior. As you can see, this simple problem can become highly complex when we take it to the real world.

So, in general, the Nash equilibrium is the best solution in competitive situations between entities that are entirely motivated by the game’s obvious payoff. In situations where scenarios are repetitive and trust can be fostered, however, other solutions might be more rewarding to real-world participants, and the Nash equilibrium ceases to be optimal.

Most human beings are not motivated by a single kind of reward, so real-world situations will always have extraneous motivations, such as fairness or group allegiance, that do not conform to the game-theory assumption of constant single-factor self-utility. Furthermore, for most people there is a physiological and psychological price to repeated conflict (and a corresponding reward for peace) that may not be taken into account in simple game-theory models. Perhaps this is the reason why, as Robert Karl Stonjek mentioned, referencing a paper on the subject, simple game-theory models do not work as predictors of ordinary human behavior.

Gametheoryman made a spirited defense of game theory, citing “indirect reciprocity” games. The results mentioned do seem to fit our intuitions about reputation and honor. But as we saw above, humans have all kinds of motivations, including prizing group above self, and in such complexity, Nash equilibriums may not even be reachable or relevant. Moreover, the complexity of human society is immense. For the relatively simple game of three-finger Morra, there is a way to find the optimal strategy, known as the Brown procedure, but it requires many tens of thousands of iterations. The complexity of real-world situations dwarfs this game by many orders of magnitude.
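The Brown procedure mentioned above is essentially fictitious play: each player repeatedly best-responds to the opponent’s empirical mix of past moves. Here is a minimal, illustrative sketch on the two-finger game from the first problem (the starting pseudo-counts and round count are arbitrary choices of mine):

```python
# Payoff to Odd for each pair (odd_move, even_move); moves are 1 or 4 fingers.
PAYOFF = {(1, 1): -2, (1, 4): 5, (4, 1): 5, (4, 4): -8}

def fictitious_play(rounds=100_000):
    odd_counts = {1: 1, 4: 1}    # start from uniform pseudo-counts
    even_counts = {1: 1, 4: 1}
    for _ in range(rounds):
        # Odd maximizes, Even minimizes, against the opponent's history so far.
        odd_move = max((1, 4), key=lambda m: sum(
            PAYOFF[(m, e)] * even_counts[e] for e in (1, 4)))
        even_move = min((1, 4), key=lambda e: sum(
            PAYOFF[(o, e)] * odd_counts[o] for o in (1, 4)))
        odd_counts[odd_move] += 1
        even_counts[even_move] += 1
    total = rounds + 2
    return odd_counts[1] / total, even_counts[1] / total

p, q = fictitious_play()
print(f"Odd plays one finger with frequency ~{p:.3f} (theory: 0.65)")
print(f"Even plays one finger with frequency ~{q:.3f} (theory: 0.65)")
```

Even on this tiny 2-by-2 game, the empirical frequencies creep toward the 13/20 equilibrium only over many thousands of rounds, which illustrates why the full three-finger game takes tens of thousands of iterations.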

As I said in a comment originally made in reply to nightrider, game theory may be able to point to general principles, but the subject material it is applied to is far too complex, and game-theory basics are still far from being fully explicated. Take my analogy of the theory of gravitation: If we lived in a system of six suns, as imagined by Isaac Asimov in his classic story “Nightfall,” their motions would be far too difficult for us to predict in practice using analytical methods, even if we were fully proficient in the theory of gravitation. We could only get some idea of short-term trends in simple situations. Now imagine applying game theory to real-world populations with hundreds of players, each competitive with some, cooperative with others, forming shifting groups of variable sizes, and with some engaging in zero-sum games and others in nonzero-sum ones. There is no way that simplistic game-theory principles can be applied to such situations predictively. The only thing that could come close is a supercomputer simulation based, first, on a fully developed game theory (which does not exist yet) and, second (and far more importantly), on measured real-world data: the strength and polarity of every pairwise interaction, the size and strength of every shifting alliance, whether each game is zero-sum or not, and probably many other factors.

Problem 3

Imagine a pair of alleles A and a that exist in equilibrium at a ratio of 0.6 to 0.4 under normal conditions, in a species that lives for a year and reproduces once a year. The allele A is dominant, so both AA and Aa individuals have similar physical characteristics. A constant allele ratio is generally maintained in the long run by “push-pull” mechanisms in nature. There may be some environmental factors that favor individuals carrying the A allele (AA’s and Aa’s) and would, if unchecked, increase its proportion, whereas other factors would tend to favor the a allele and resist A’s increase. For simplicity, let us assume that such factors occur serially.

Assume that under normal circumstances, without any segregation distortion, you have three years during which the environment is such that the A allele is favored. Both AA and Aa individuals have a certain survival/reproductive advantage over aa individuals, and this causes the A allele to increase its proportion by 10 percent in the first year, rising to 0.66. The same degree of advantage is present in the second and third years, allowing the proportion of the A allele to rise further.

However, in the fourth year the conditions change and the allele ratio falls back again to the equilibrium value. This happens because aa individuals are favored in the fourth year, and extra copies of the a allele survive and find their way to the next generation. The advantage to aa individuals in the fourth year is proportional to the square of the difference in their numbers from the equilibrium value of 0.16. As an example, if the proportion of aa individuals is 0.12 at the start of the fourth year, the advantage they possess will be four times what they would have had if their proportion had been 0.14. Thus the “force” pulling the gene ratio back to equilibrium increases the more the ratio deviates from it, up to a maximum.

Now let’s say that allele A manages to distort segregation so that 60 percent of the copies of the A gene in Aa individuals go into the next generation, instead of the usual 50 percent. What would the new equilibrium ratio be? What proportion of A’s cheating will the above mechanism let it get away with?
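To get a feel for what distortion alone does, before bringing in the push-pull mechanism, the sketch below computes one generation of random mating under Hardy-Weinberg assumptions. The function name and the parameter d (the fraction of an Aa parent’s gametes that carry A, 1/2 when segregation is fair) are my own notation:

```python
from fractions import Fraction as F

def next_A_freq(p, d=F(1, 2)):
    """Allele A's frequency in the next generation's gamete pool."""
    AA, Aa = p * p, 2 * p * (1 - p)  # Hardy-Weinberg genotype frequencies
    return AA + Aa * d               # AA transmits only A; Aa transmits A with prob d

p = F(3, 5)                          # the equilibrium frequency 0.6
print(next_A_freq(p))                # 3/5 -- fair segregation leaves it unchanged
print(next_A_freq(p, F(3, 5)))       # 81/125 = 0.648 -- distortion pushes A up
```

With fair segregation the frequency is a fixed point, while the 60 percent distorter gains ground every generation; the puzzle asks where the quadratic fourth-year pull-back finally stops it.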

This kind of selfishness by genes is, nevertheless, quite rare. There are thousands of dominant genes that have survived in the human genome for millennia. Clearly, there are strong mechanisms by which Mendel’s law of segregation (that decrees equal access to gametes by allelic pairs) is enforced. One possibility is that if one gene can cheat, so can its alleles, restoring equilibrium. If segregation does get distorted, the simple “push-pull” mechanism I described is unable to resist it by itself. A stronger push-pull mechanism proportional to the fifth or higher power, rather than the square of the distance from equilibrium, would be necessary in this case. Note that if a cheating gene prevails, the fitness of the entire species is decreased, and in extreme cases this could lead to extinction.

As usual, there were some great comments from readers. I enjoyed the dialogue between Robert Karl Stonjek and gametheoryman, as well as nightrider’s many mathematically accurate contributions. The Quanta T-shirt for this month goes to Mark Pearson. Thanks to all who contributed.

There will be no Insights column this month. Happy holidays to everyone, and see you next year for new insights.

Mathematicians Crack the Cursed Curve

December 7, 2017

Mathematical proofs are elaborate theoretical arguments that often say little about actual numbers and calculations — the concrete values non-mathematicians think of as “solving a math problem.” Occasionally, though, theoretical proofs lead to explicit results. This was the case with an exciting sequence of events that culminated last month.

The story takes place in the mathematical field of number theory. The theoretical side involves some intriguing new ideas from Minhyong Kim, a mathematician at the University of Oxford.

As I explained in a recent article, Kim works in a highly abstract area of mathematics, but the goal of his work is actually quite straightforward: to find a method for identifying all the rational solutions to particular kinds of equations.

The rational numbers, remember, consist of all the numbers that can be written as a fraction. So for the equation x² + y² = 1, one rational solution is x = 3/5 and y = 4/5.
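That solution is easy to verify with exact rational arithmetic; a throwaway check:

```python
from fractions import Fraction

# The point (3/5, 4/5) lies exactly on the unit circle x^2 + y^2 = 1.
x, y = Fraction(3, 5), Fraction(4, 5)
print(x**2 + y**2)  # 1
```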

The problem Kim is wrestling with dates all the way back to Diophantus of Alexandria, who studied such “Diophantine equations” in the third century A.D. The most significant recent result on the topic provided an important but blunt reframing of the problem: In 1986, Gerd Faltings won the Fields Medal, math’s highest honor, primarily for proving that certain classes of Diophantine equations have only finitely many rational solutions (rather than infinitely many).

Faltings’ proof was what mathematicians call “ineffective,” meaning that it didn’t actually count the number of rational solutions, let alone identify them. The vast majority of proofs in number theory are similarly ineffective. This kind of proof arises especially often when mathematicians argue by contradiction: If there are infinitely many rational solutions, then you get a logical contradiction; therefore, there must be only finitely many rational solutions. (And good luck finding them.)

Since Faltings’ result, mathematicians have looked for “effective” methods for finding rational solutions to Diophantine equations, and Kim has one of the most promising new ideas. Drawing inspiration from physics, he thinks of rational solutions to equations as being somehow the same as the path that light travels between two points.

Kim argues that it should be possible to start with a Diophantine equation and construct some other geometric object based on it (called a Selmer variety), such that the equation and the new object intersect at precisely the points that represent rational solutions. Kim has been developing this idea for more than a decade, and mathematicians have been curious to see whether it will really work in practice.

Last month a team of mathematicians — Jennifer Balakrishnan, Netan Dogra, J. Steffen Müller, Jan Tuitman and Jan Vonk — identified the rational solutions for a famously difficult Diophantine equation known as the “cursed curve.” The curve’s importance in mathematics stems from a question raised by the influential mathematician Jean-Pierre Serre in 1972. Mathematicians have made steady progress on Serre’s question over the last 40-plus years, but it involves an equation they just couldn’t handle — the cursed curve.

In 2002 the mathematician Steven Galbraith identified seven rational solutions to the cursed curve, but a harder and more important task remained: to prove that those seven are the only ones (or to find the rest if there are in fact more).

The authors of the new work followed Kim’s general approach. They constructed a specific geometric object that intersects the graph of the cursed curve at exactly the points associated to rational solutions. “Minhyong does very foundational theoretical work in his papers. We’re translating the objects in Kim’s work into structures we can turn into computer code and explicitly calculate,” said Balakrishnan, a mathematician at Boston University. The process proved that those seven rational solutions are indeed the only ones.

Kiran Kedlaya, a mathematician at the University of California, San Diego, calls the paper a “watershed moment” in the study of Diophantine equations. In addition to proving results for the cursed curve, this new work establishes something else: that Kim’s method really works. It remains to be seen which equations it will take down next.