Abstract. In this talk we will look at various examples of classification problems in symplectic linear algebra: conjugacy classes in the symplectic group and its Lie algebra, linear lagrangian relations up to conjugation, tuples of (co)isotropic subspaces. I will explain how many such problems can be encoded using the theory of symplectic poset representations, and will discuss some general results of this theory. Finally, I will recast this discussion from a broader category-theoretic perspective.

Lorand is working on a paper with Weinstein and Christian Herrmann that delves deeper into these topics. I first met him at the ACT2018 school in Leiden, where we worked along with Blake Pollard, Fabrizio Genovese and Maru Sarazola on biochemical coupling through emergent conservation laws. Right now he’s visiting UCR and working with me to dig deeper into these questions using symplectic geometry! This is a very tantalizing project that keeps on not quite working…

One month ago, we laughed about Elon Musk's new Hyperloop science-fiction futuristic mega-invention, which turned out to be... a tiny useless road tunnel. Well, to make it more impressive, he has also built car elevators for the cars to get there, so that the traffic through the tunnel is even slower than it would otherwise be. That one-mile tunnel, with a diameter of around 4 meters, cost about $10 million.

Elon has bragged that he could save 99% of the expenses, which is completely ludicrous because he just bought a boring machine, told his employees to read the instruction manual, and they did exactly what anyone else does with a boring machine. So as long as the people and utilities and others are paid adequately and one compares tunnels with the same internal equipment, or the lack of it, they will cost almost exactly the same.

In a recent interview, a smart disbeliever in the Elon Musk religion nicknamed Montana Skeptic quoted someone else – sorry, I forgot whom – who said that Elon Musk's comments look intelligent to you only until he starts to talk about a field that you understand. Ladies and Gentlemen in particle physics, here you have the opportunity to evaluate the validity of that quote because... Elon Musk has descended from Heaven and visited the mortals in particle physics.

The stupidity in his tweet has exceeded my expectations. First of all, he responded to an MIT Technology Review tweet about the plans to build the FCC collider, a next-generation accelerator in a 100-kilometer tunnel that would start as an electron-positron collider (just like LEP on steroids) and later be converted to a proton-proton collider (just like the LHC on steroids).

Now, if Elon Musk were willing to sacrifice at least 20 seconds of his heavenly time by reading the first two sentences of the MIT Technology Review popular article that was quoted in the tweet that he responded to, he would have known that no one at CERN is planning a "new LHC tunnel", as Musk wrote, and his response could have been significantly less dumb.

Instead, the LHC (the Large Hadron Collider) is a particular experimental device which has its own particular tunnel (just like Elon Musk has his particular body). If a different tunnel (body) were built, it would be a different entity, just like Larry Ellison isn't just "another body of Elon Musk". It wouldn't be the LHC anymore. It would be the FCC. The tunnel of the FCC wouldn't be just new. It would be different from – and almost 4 times longer than – the LHC tunnel. If the same tunnel as the LHC tunnel were enough for the next generation of the CERN colliders, CERN could just use the LHC tunnel itself – the same one – just like when the LEP tunnel was recycled to build the LHC.

I think it's not just a funny terminological typo. Musk must honestly fail to understand that CERN isn't building useless tunnels just for fun – which is what he has done under L.A. – so he probably assumes that CERN is as "exuberant" as he is, if I have to avoid the term "idiot". CERN is thinking about a new tunnel because a new tunnel is actually needed for an experiment that would accelerate the protons to a much higher energy. And physicists want a higher energy because they want to probe phenomena in conditions that haven't been probed before – and that novelty is needed in scientific research. In his business, if a man wants to be celebrated as a futuristic inventor by fans of the appropriate intelligence, more precisely the lack of it, it's enough to build the same tunnels as people built 100 years ago. But in particle physics, this kind of exact repetition isn't quite enough for progress.

Fine, we learn that at the Royal Society, Fabiola Gianotti told him:

Ciao Elon, I've heard that you can build tunnels 100 times cheaper than everyone else. Why don't you build a new collider for us?

And Elon replied "why not, we shall save several billions of dollars". Here we're coming to another aspect of his cluelessness – which has already manifested itself during the L.A. stunt. He clearly doesn't understand that a significant fraction of the costs of real-world tunnels isn't the hole itself but the infrastructure that is placed in the hole: lighting systems, exits, camera systems, whatever. If you avoid the expenses for this "decoration" of the tunnel, you end up with much lower expenses, indeed.

You know, this "not so subtlety" becomes extremely important in particle physics colliders because their tunnels are not just empty holes. They host some of the strongest magnets in the world. They're superconducting magnets kept at very low temperature – in the case of the LHC, it's 1.9 kelvins. That's colder than 2.7 kelvins, the temperature of the cosmic microwave background in outer space! So CERN isn't planning to build some minimalistic cheap holes in the Earth! The superconducting magnets cost something, you know. And each major detector at the LHC costs about one billion dollars, too. Relative to the devices that are placed in the tunnels, the mere holes are just 10% or at most "two dozen percent" of the expenses.

What are the costs of the bare tunnel needed for the colliders?

If you Google search for these keywords, you will find various sources. Let me pick a somewhat random 2014 paper so that I get the data from a similarly random place as a Twitter user probably would. On page 2/10 of the PDF file, you will find out that in 2014 U.S. dollars (after some kind of conversion), one meter of a 3-meter-diameter tunnel costs about $5,000 in the Texan conditions of the SSC (the Superconducting Super Collider, Ronald Reagan's project canceled during Bill Clinton's years) and $20,000 in the Alpine rock conditions of CERN (LEP costs from the 1980s increased by an inflation factor of 1.5 or so).

The LHC tunnel has a length of 27,000 meters, which translates to $540 million according to the Alpine prices. The FCC tunnel would have a length of 100,000 meters, which would translate to $500 million assuming the Texan geology and prices, or to $2 billion in the Alpine conditions. Let's generously consider the greatest number among these, $2 billion.
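If you want to redo the arithmetic, it fits in a tiny script (per-meter prices as quoted above from the paper):

```python
# Bare-tunnel cost per meter in 2014 USD, as quoted from the 2014 paper.
cost_per_m = {"texas_ssc": 5_000, "alpine_cern": 20_000}

lhc_m = 27_000    # LHC tunnel length, meters
fcc_m = 100_000   # proposed FCC tunnel length, meters

print(lhc_m * cost_per_m["alpine_cern"] / 1e6)  # 540.0  ($ million, Alpine)
print(fcc_m * cost_per_m["texas_ssc"] / 1e9)    # 0.5    ($ billion, Texan geology)
print(fcc_m * cost_per_m["alpine_cern"] / 1e9)  # 2.0    ($ billion, Alpine geology)
```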

If Elon Musk saves "several billion Euros" and if this phrase means "at least two billion Euros", he will build the tunnel at least for free! And if "several billion" means more than "two billion", then he will build the tunnel and pay billions of Euros to CERN on top of it. Which is exactly what he should do. ;-)

Now, the MIT Technology Review article lists a higher price for the FCC tunnel than what we estimated based on the past tunnels, namely €5 billion (the lepton and later hadron colliders would add €4 billion and €15 billion, respectively). The increase may be partly due to using future, less valuable Euros, partly due to a selectively higher inflation in the boring industry. But even with this number, it's implausible that Musk's company would save "several billions" because that would mean a saving of 50% or so.

Needless to say, in reality, he can't even save 20% of the costs because Boring Co. is doing almost the same thing as every other boring company. At most, with some adjustments, he could save several hundred million Euros, but the saving of "billions of Euros" is just a ludicrous statement showing his absolute cluelessness not only in particle physics but also in the boring industry. But even if the competition's costs were €5 billion and his would be €2-3 billion because of some miraculous savings or lower profit margins, I would find it irresponsible for CERN to assign this important project to such an inexperienced boring company – a company led by a man with such a rich track record of broken promises who can't distinguish the LHC from the FCC – and a company that can so easily go out of business. On the other hand, CERN could use the Musk-can-bore argument to push the real builders' price down – but I doubt it would make much impact.

Incidentally, one may look at the prices of the bare tunnels quoted in the arXiv paper and compare the numbers to Musk's statements about the cost of tunnels bored by his competitors. Tunnels with a 3-meter diameter are close to what he built in L.A. The length was 1,800 meters or so. With the Texan, SSC-like costs of $5,000 per meter in current money, the cost is $9 million. In the Alpine conditions, it would be four times as expensive, about $36 million.

Because the L.A. geology is arguably closer to Texas, the SSC's boring company would have built his tunnel for $9 million, almost exactly matching Musk's announced cost of $10 million. There is obviously nothing substantial (let alone "99% discounts") that he could have contributed to the construction of tunnels – so he hasn't contributed anything to the construction of tunnels.

Too many people have been turned into complete idiots by decades of anti-scientific propaganda about global warming Armageddons and saviors who save the Earth or the Universe by some totally inconsequential ritual. So they will buy literally any piece of šit, e.g. Elon Musk's claims about his revolutionary construction of tunnels.

Bonus: How much global warming was averted by Tesla

Let me just calculate another number encoding the absolute detachment of Musk's fans from reality. Because of Tesla, he's often painted as a top savior of the Earth who protects it from global warming. How much global warming has been avoided by Tesla cars?

Let's generously assume that all the recently observed warming – by 0.02 °C a year (and I overstated the number because I am generous again) – is due to the man-made CO2 emissions. Let's generously assume that Tesla cars make no CO2 emissions during the production – which is untrue – and that the electricity going to the Tesla cars makes no CO2 emissions either – even though most of it is produced in coal power plants that make about as much emissions as combustion engines.

Tesla has produced about 500,000 cars in its history so far. The total number of cars in the world is about 1.2 billion. I generously used a lower 2014 number by which I overstated Tesla's fraction as 0.5/1,200 = 1/2,400. So 1/2,400 of the cars were made "emissionless", with all my ludicrously generous exaggeration of Tesla's impact.

In 15 years, the global warming would be some 0.3 °C using the rate above, about 20% of it is due to cars, so it's 0.06 °C, and assuming all the cars mentioned above remain active for the same 15 years, Tesla's reduction of the global mean temperature would therefore be 0.06 °C/2,400 = 0.000025 °C. Twenty-five microkelvins.
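The whole estimate fits in a few lines (just reproducing the generous numbers above):

```python
# Back-of-envelope: warming averted by Tesla, using the generous assumptions above.
warming_rate = 0.02          # °C per year, all attributed to man-made CO2
years = 15
car_fraction = 0.20          # share of the warming attributed to cars
tesla_share = 0.5e6 / 1.2e9  # Tesla's share of the world's cars ≈ 1/2400

averted = warming_rate * years * car_fraction * tesla_share
print(averted)  # ≈ 2.5e-05 °C, i.e. twenty-five microkelvins
```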

Even if our beloved planet were threatened by that warming, and it's not, Elon Musk has prevented twenty-five fudging microkelvins of that warming – and that is a vast overestimate because of all the generosity above. In what system of numbers may such an infinitesimal contribution be called "salvation of the planet"? A significant change would have to be at least five orders of magnitude larger. All those people who praise Musk because of "global warming": can't you realize how incredibly moronic you sound?

Just to be sure, the answer to this question obviously is that they don't realize anything. Under Musk's moronic tweet, there are some further responses and quite unsurprisingly, they mostly come from Musk's followers. So we learn that either the colliders are dangerous because they will devour the Earth, or they are Musk's Hyperloop with passengers who happen to be particles. Very clever, indeed.

Hey, you should read this blog post by Tommaso Dorigo. It touches upon many of the myths regarding particle physics, especially the hype surrounding the name "god particle", as if that means something.

I've touched upon some of the issues he brought up. I think many of us who are active online and deal with the media and the public tend to see and observe the same thing, the same mistakes, and misinformation that are being put in print. One can only hope that by repeatedly pointing out such myths and why they are wrong, the message will slowly seep into the public consciousness.

As I mentioned at the weekend, today marks the centenary of the historic first meeting of Dáil Éireann, at the Mansion House in Dublin on (Tuesday) 21st January 1919. The picture above shows the 27 Teachtaí Dála (TDs) present. The event is being commemorated this afternoon.

I’m summarizing the events surrounding the First Dáil largely because I didn’t learn anything about this at school. Despite Ireland being such a close neighbour, its history is only covered in cursory fashion in the British education system.

The background to the First Dáil is provided by the General Election which took place in December 1918 and which led to a landslide victory for Sinn Féin, who won 73 seats and turned the electoral map of Ireland very green, though Unionists held 22 seats in Ulster.

In accordance with its policy of abstentionism, the Sinn Féin MPs refused to take their seats in Westminster and instead decided to form a provisional government in Ireland. In fact 35 of the successful candidates in the General Election were in prison, mostly because of their roles in the 1916 Easter Rising, and the Ulster Unionists refused to participate, so the First Dáil comprised only 27 members, as seen in the picture. It was chaired by Seán T. O’Kelly; Cathal Brugha was elected Speaker (Ceann Comhairle).

It also approved a Democratic Programme, based on the 1916 Proclamation of the Irish Republic, and read and adopted a Message to the Free Nations of the World in Irish, English and French:

On the same day as the first meeting of the Dáil (though the timing appears not to have been deliberate), two members of the Royal Irish Constabulary were shot dead by volunteers of the Irish Republican Army in an ambush at Soloheadbeg, Co Tipperary. The IRA squad made off with explosives and detonators intended for use in mining. This is generally regarded as the first incident of the Irish War of Independence. The war largely consisted of a guerrilla campaign by the IRA countered by increasingly vicious reprisals by British forces, especially the infamous Black and Tans, who quickly became notorious for their brutality and indiscipline.

Following the outbreak of the War of Independence, the British Government decided to suppress the Dáil, and in September 1919 it was prohibited. The Dáil continued to meet in secret, however, and Ministers carried out their duties as best they could.

The War of Independence lasted until the summer of 1921, when it was ended by a truce and the negotiation of the Anglo-Irish Treaty. That, in turn, triggered another cycle of violence with the outbreak of the Irish Civil War in 1922 between pro-Treaty and anti-Treaty forces, and the eventual partition of Ireland into the independent Republic and Northern Ireland, which remained part of the United Kingdom.

Puzzle. You measure the energy and frequency of some laser light trapped in a mirrored box and use quantum mechanics to compute the expected number of photons in the box. Then someone tells you that you used the wrong value of Planck’s constant in your calculation. Somehow you used a value that was twice the correct value! How should you correct your calculation of the expected number of photons?

I’ll give away the answer to the puzzle below, so avert your eyes if you want to think about it more.

This scenario sounds a bit odd—it’s not very likely that your table of fundamental constants would get Planck’s constant wrong this way. But it’s interesting because once upon a time we didn’t know about quantum mechanics and we didn’t know Planck’s constant. We could still give a reasonably good description of some laser light trapped in a mirrored box: there’s a standing wave solution of Maxwell’s equations that does the job. But when we learned about quantum mechanics we learned to describe this situation using photons. The number of photons we need depends on Planck’s constant.

And while we can’t really change Planck’s constant, mathematical physicists often like to treat Planck’s constant as variable. The limit where Planck’s constant goes to zero is called the ‘classical limit’, where our quantum description should somehow reduce to our old classical description.

Here’s the answer to the puzzle: if you halve Planck’s constant you need to double the number of photons. The reason is that the energy of a photon with frequency ν is E = hν. So, to get a specific total energy E in our box of light of a given frequency, we need N = E/(hν) photons.
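As a numeric sanity check of the E = Nhν reasoning (the frequency and energy below are illustrative values I picked, not from the puzzle):

```python
# Photon-number check: E = N * h * nu, so N = E / (h * nu).
h = 6.62607015e-34   # Planck's constant, J*s
nu = 4.74e14         # frequency of red laser light, Hz (illustrative)
E = 1e-9             # total energy trapped in the box, J (illustrative)

N = E / (h * nu)             # photon count with the correct h
N_wrong = E / (2 * h * nu)   # photon count computed with twice the correct h

print(N / N_wrong)  # 2.0 — halving Planck's constant doubles the photon count
```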

So, the classical limit is also the limit where the expected number of photons goes to infinity! As the ‘packets of energy’ get smaller, we need more of them to get a certain amount of energy.

This has a nice relationship to what I’d been doing with geometric quantization last time.

I explained how we could systematically replace any classical system by a ‘cloned’ version of that system: a collection of identical copies constrained to all lie in the same state. The set of allowed states is the same, but the symplectic structure is multiplied by a constant factor: the number of copies. We can see this as follows: if the phase space of our system is an abstract symplectic manifold M, its kth power M^k is again a symplectic manifold. We can look at the image of the diagonal embedding of M in M^k, sending each point x to (x, …, x).

The image is a symplectic submanifold of M^k, and it’s diffeomorphic to M, but the diagonal embedding is not a symplectomorphism from M to its image. The image has a symplectic structure that’s k times bigger!
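The computation behind ‘k times bigger’ is a one-line pullback. Writing ω for the symplectic structure on M and Δ for the diagonal embedding:

```latex
\Delta \colon M \to M^k, \quad \Delta(x) = (x, \dots, x),
\qquad
\Delta^* \big( \underbrace{\omega \oplus \cdots \oplus \omega}_{k} \big) = k\,\omega .
```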

What does this mean for physics?

If we act like physicists instead of mathematicians for a minute and keep track of units, we’ll notice that the naive symplectic structure ω in classical mechanics has units of action: think of ∫p dq. Planck’s constant also has units of action, so to get a dimensionless version of the symplectic structure, we should use ω/ℏ. Then it’s clear that multiplying the symplectic structure by k is equivalent to dividing Planck’s constant by k!
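In symbols: only the ratio of the symplectic structure to Planck’s constant enters the dimensionless description, so

```latex
\frac{k\,\omega}{\hbar} \;=\; \frac{\omega}{\hbar/k},
```

and rescaling ω by k is indistinguishable from rescaling ℏ by 1/k.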

So: cloning a system, multiplying the number of copies by k, should be related to dividing Planck’s constant by k. And the k → ∞ limit should be related to a ‘classical limit’.

Of course this would not convince a mathematician so far, since I’m using a strange mix of ideas from classical mechanics and quantum mechanics! But our approach to geometric quantization makes everything precise. We have a category of classical systems, a category of quantum systems, a quantization functor from the former to the latter, and a ‘projectivization’ functor going back, which reveals that quantum systems are special classical systems. And last time we saw that there are also ways to ‘clone’ classical and quantum systems.

Our classical systems are more than mere symplectic manifolds: they are projectively normal subvarieties of ℂPⁿ for arbitrary n. So, we clone a classical system by a trick that looks fancier than the diagonal embedding I mentioned above: we instead use the kth Veronese embedding, which defines a cloning functor on the category of classical systems.

But this cloning process has the same effect on the underlying symplectic manifold: it multiplies the symplectic structure by k.

Similarly, we clone a quantum system by replacing its space of states (the projectivization of the Hilbert space H) by the projectivization of SᵏH, where SᵏH is the kth symmetric power. This gives a cloning functor on the category of quantum systems,

and the ‘obvious squares commute’: quantizing and then cloning agrees with cloning and then quantizing.

All I’m doing now is giving this math a new physical interpretation: the k-fold cloning process is the same as dividing Planck’s constant by k!

If this seems a bit elusive, we can look at an example like the spin-j particle. In Part 6 we saw that if we clone the state space for the spin-1/2 particle k times we get the state space for the spin-j particle, where j = k/2. But angular momentum has units of Planck’s constant, and the quantity j is really an angular momentum divided by ℏ. So this process of replacing 1/2 by k/2 can be reinterpreted as the process of dividing Planck’s constant by k: if Planck’s constant is smaller, we need j to be bigger to get a given angular momentum! And what we thought was a single spin-1/2 particle in the state ψ becomes k spin-1/2 particles in the ‘cloned’ state ψ ⊗ ⋯ ⊗ ψ. As explained in Part 6, we can reinterpret this as a state of a single spin-k/2 particle.
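Just to check the dimension count: the kth symmetric power of the 2-dimensional spin-1/2 space has dimension k+1, matching the spin-j multiplet with j = k/2. (`sym_power_dim` is my own helper name, not notation from the series.)

```python
from math import comb

def sym_power_dim(d, k):
    """Dimension of the kth symmetric power of a d-dimensional space."""
    return comb(d + k - 1, k)

# S^k of the spin-1/2 space (d = 2) should match the spin-j multiplet, j = k/2
for k in range(1, 6):
    j = k / 2
    assert sym_power_dim(2, k) == int(2 * j + 1)

print([sym_power_dim(2, k) for k in range(1, 6)])  # [2, 3, 4, 5, 6]
```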

Finally, let me point out something curious. We have a systematic way of changing our description of a quantum system when we divide Planck’s constant by an integer. But we can’t do it when we divide Planck’s constant by any other sort of number! So, in a very real sense, Planck’s constant is quantized.

• Part 1: the mystery of geometric quantization: how a quantum state space is a special sort of classical state space.

• Part 2: the structures besides a mere symplectic manifold that are used in geometric quantization.

• Part 3: geometric quantization as a functor with a right adjoint, ‘projectivization’, making quantum state spaces into a reflective subcategory of classical ones.

January 20, 2019

When looking at questions on X validated, I came across this seemingly obvious request for an unbiased estimator of P(X=k) when X~B(n,p). Except that X is not observed; only Y~B(s,p) with s&lt;n is. Since P(X=k) is a polynomial in p, I was expecting such an unbiased estimator to exist. But it does not, for the reason that Y only takes s+1 values, so any function of Y, including the MLE of P(X=k), has an expectation involving monomials in p of power at most s, while P(X=k) has degree n. It is actually straightforward to establish properly that the unbiased estimator does not exist. But this remains an interesting additional example of the rarity of the existence of unbiased estimators, to be saved for a future mathematical statistics exam!
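A numerical illustration of the degree argument, with hypothetical sizes n=5, s=3, k=2 of my own choosing: fit an estimator f on s+1 values of p, then watch unbiasedness fail at a fresh value of p.

```python
from math import comb

# X ~ B(n, p) is the quantity of interest, but only Y ~ B(s, p), s < n, is
# observed. An unbiased estimator f of P(X = k) would need, for ALL p,
#   E_p[f(Y)] = sum_y f(y) C(s,y) p^y (1-p)^(s-y) = C(n,k) p^k (1-p)^(n-k).
# The left side has degree at most s in p; the right side has exact degree n.
n, s, k = 5, 3, 2  # hypothetical sizes for illustration

def target(p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def expectation(f, p):
    return sum(f[y] * comb(s, y) * p**y * (1 - p)**(s - y) for y in range(s + 1))

ps = [0.1, 0.3, 0.5, 0.7]  # s+1 fitting points
A = [[comb(s, y) * p**y * (1 - p)**(s - y) for y in range(s + 1)] for p in ps]
b = [target(p) for p in ps]

# Solve A f = b by Gauss-Jordan elimination (pivots are safely nonzero here)
for i in range(s + 1):
    piv = A[i][i]
    A[i] = [a / piv for a in A[i]]
    b[i] /= piv
    for j in range(s + 1):
        if j != i:
            r = A[j][i]
            A[j] = [a - r * ai for a, ai in zip(A[j], A[i])]
            b[j] -= r * b[i]
f = b  # candidate estimator values f(0), ..., f(s)

print(abs(expectation(f, 0.9) - target(0.9)))  # ≈ 0.19: the bias remains
```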

In the early hours of tomorrow morning (Monday 21st January 2019), people in Ireland and the United Kingdom will be able to see a Total Lunar Eclipse. It will in fact be visible across a large part of the Earth’s surface, from Asia to North America. Around these parts, the time when the Moon is fully within the shadow of the Earth is from about 4.40am until 5.40am (Irish Time). The Moon will be well over the horizon during totality.

For a combination of reasons this eclipse is being called a Super Blood Wolf Moon. The `Super’ is because the Full Moon will be close to its perigee (and will therefore look a bit bigger than usual). The `Blood’ is because the Moon will turn red during the eclipse, the blue component of light reflected from the Moon’s surface having been scattered by the Earth’s atmosphere. The `Wolf’ is because the first Full Moon of the New Year is, according to some traditions, called a `Wolf Moon’, as it is associated with baying wolves. Other names for first Full Moon of the year include: Ice Moon, Snow Moon, and the Moon after Yule.

Having looked at the Weather forecast for Ireland, however, it seems that instead of a Super Blood Wolf Moon we’re more likely to get a Bloody Clouds No Moon…

I haven’t done a blog post about crosswords for a while so I thought I’d post a quickie about the Christmas Azed Puzzle Competition (No. 2428), the results of which were announced this week. One of the few things I really enjoy about Christmas is that the newspapers have special crossword puzzles that stop me from getting bored with the whole thing. I had saved up a batch of crosswords and gradually worked my way through them during the holiday. I left the Azed puzzle until last because, as you can see from the image above (or from the PDF here), it looks rather complicated. In fact the rubric was so long the puzzle extended across two pages in the print edition of the paper. I therefore thought it was fearsome and needed to build up courage to tackle it.

The title `Play-tent’ is a merger of two types of puzzle: `Letters Latent’ (in which solvers have to omit one letter wherever it occurs in the solution to the clue before entering it in the grid) and `Playfair’ which is based on a particular type of cypher. I blogged about the latter type of puzzle here. In this ingenious combination, the letters omitted from the appropriate clues together make up the code phrase required to construct the Playfair cypher grid.

It turned out not to be as hard as it looked, however. I got lucky with the Letters Latent part in that the first four letters I found that had to be removed were F, L, K and S. Taking into account the hint in the rubric that the code-phrase consisted of three words with a total of 13 letters from a `familiar seasonal verse’, I guessed FLOCKS BY NIGHT, which is thirteen letters long and fits the requirement for a Playfair code phrase that no letters are repeated. It was straightforward to check this by looking at the checked lights for the bold-faced clues, the solutions to which were to be entered in encrypted form. Most of these clues were not too difficult to solve for the unencrypted answer (e.g. 18 across is clearly ABELIA, a hidden-word clue). Thus convinced that my guess was right, I proceeded to solve the rest of the puzzle. The completed grid, together with the Playfair grid, is shown here:
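Out of curiosity, building a Playfair square from a code phrase takes only a few lines of code. This is my own sketch; conventions (such as merging I and J) may differ in detail from Azed’s:

```python
# Build a 5x5 Playfair square: code-phrase letters first (deduplicated),
# then the rest of the alphabet, with J merged into I.
def playfair_square(phrase):
    phrase = phrase.upper().replace("J", "I")
    seen, letters = set(), []
    for ch in phrase + "ABCDEFGHIKLMNOPQRSTUVWXYZ":
        if ch.isalpha() and ch not in seen:
            seen.add(ch)
            letters.append(ch)
    return ["".join(letters[i:i + 5]) for i in range(0, 25, 5)]

for row in playfair_square("FLOCKS BY NIGHT"):
    print(row)  # FLOCK / SBYNI / GHTAD / EMPQR / UVWXZ
```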

It took me about 2 hours to solve this completely, which is quite a bit longer than for a `Plain’ Azed puzzle, but it wasn’t anywhere near as hard as I anticipated. People sometimes ask me how to go about solving cryptic crosswords and I have to say that there isn’t a single method: it’s a mixture of deduction, induction, elimination and guesswork. Leibniz was certainly right when he said that “an ingenious conjecture greatly shortens the road”. If you want to learn how to crack these puzzles, I think the only way is by doing lots of them. In that respect they’re a lot like Physics problems!

But solving the puzzle is not all you have to do for the Azed competition puzzles. You have to supply a clue for a word as well. The rubric here mentions the word three words before the code phrase, i.e. SHEPHERDS. Although I was quite pleased with my clue, I only got a HC in the competition. You can find the prize-winning clues together with comments by Azed here.

For the record, my clue was:

What’s hot on record? You’ll find pieces written about that in guides!

This parses as H (hot) + EP (record) in SHERDS (a word for fragments). The definition here is `guides’, which is a synonym for shepherds (treated as a form of the verb).

I’ve said before on this blog that I’m definitely better at solving puzzles than setting them, which probably also explains why it takes me so long to compose exam questions!

Anyway, it was an enjoyable puzzle and I look forward to doing the latest Azed crossword later this evening.

Update: today’s Azed Crossword (No. 2432) was quite friendly. I managed to complete it in about half an hour.

I love this sort of report: it is based on a material that was discovered long ago and is rather common; it is based on a consequence of a theory; it has both direct applications and rich physics; and finally, it bears an amazing resemblance to what many physics students have seen in textbooks.

Researchers led by Michael Hoffmann have now measured the double-well energy landscape in a thin layer of ferroelectric Hf0.5Zr0.5O2 for the first time and so confirmed that the material indeed has negative capacitance. To do this, they first fabricated capacitors with a thin dielectric layer on top of the ferroelectric. They then applied very short voltage pulses to the electrodes of the capacitor, while measuring both the voltage and the charge on it with an oscilloscope.

“Since we already knew the capacitance of the dielectric layer from separate experiments, we were then able to calculate the polarization and electric field in the ferroelectric layer,” Hoffmann tells Physics World. “We then calculated the double-well energy landscape by integrating the electric field with respect to the polarization.”
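To see schematically what a double-well landscape with a negative-capacitance region looks like, here is a textbook Landau-theory sketch with made-up parameters (not the measured values from the paper):

```python
# Hypothetical Landau free energy of a ferroelectric (illustrative parameters):
#   U(P) = a*P**2 + b*P**4  with a < 0, b > 0 gives a double well.
# The capacitance goes like 1/U''(P), so it is negative where U is concave.
a, b = -1.0, 1.0

def U(P):
    return a * P**2 + b * P**4

def curvature(P):
    return 2 * a + 12 * b * P**2  # U''(P)

# Scan polarization values and find where U''(P) < 0
Ps = [i / 1000 for i in range(-1500, 1501)]
neg = [P for P in Ps if curvature(P) < 0]
print(min(neg), max(neg))  # ≈ -0.408, 0.408, i.e. |P| < sqrt(-a/(6b))
```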

Of course, there are plenty of potential applications for something like this.

One of the most promising applications utilising negative capacitance is electronic circuits with much lower power dissipation, which could be used to build more energy-efficient devices than any that are possible today, he adds. “We are working on making such devices, but it will also be very important to design further experiments to probe the negative capacitance region in the structures we made so far to help improve our understanding of the fundamental physics of ferroelectrics.”

But the most interesting part for me is that, if you look at Fig. 1 of the Nature paper, the double-well structure is something that many of us former and current physics students may have seen. I remember solving this double-well problem in my graduate-level QM class. Of course, we were solving it in the energy-versus-position dimension, instead of the energy-versus-polarization dimension shown in the figure.

I guess that you don't doubt that the Academia in the Western countries is leaning to the left. Well, that's a terrible understatement. It's heavily left-wing. A 2009 Pew Research poll found that among scientists in the U.S. Academia, 55% were registered Democrats, 32% were Independents, and 6% were Republicans. The numbers have probably gotten much worse in the subsequent decade.

As we could conclude e.g. by seeing the 4,000 signatures under a petition penned by D.H. against Alessandro Strumia, out of 40,000 HEP authors that could have signed, the percentage of the hardcore extremist leftists who are willing to support even the most insidious left-wing campaigns is about 10% in particle physics. Assuming that the number 6% above was approximately correct, you can see that the Antifa-type leftists outnumber all Republicans, including the extremely moderate ones and the RINOs (Republicans In Name Only), and a vast majority of those 6% are RINOs or extremely moderate Republicans.

Because the extreme leftists are the only loud subgroup – you know, the silent majority is silent as the name indicates – they shape the atmosphere in the environment to a very unhealthy degree. It has become unhealthy especially because they have managed to basically expel everybody who would be visibly opposing them.

"Diversity" is one of the buzzwords that have become even more influential in the Academia than in the whole U.S. society – and even their influence over the latter is clearly excessive.

In practice, "diversity" is a word used to justify racist and sexist policies against whites (and yellows – who are often even more suppressed), against men, and especially against white men. Those are still allowed in the Academia, but only if they "admit" that their previous lives and origin are non-existent; that they abhor masculinity and the white race and deny that white men have built most of the civilization; that the whites, men, and white men have only brought misery to the world; and if they promise that they will dedicate their lives to the war on the real, i.e. evil, men, whites, and white men.

The radically left-wing 10% of the university people are really excited about this hostility against the white men – they are as excited as the Nazis were during the Night of Broken Glass (even pointing out this analogy could cause trouble for you). The silent majority doesn't care or reacts with some incoherent babbling that seems safe enough to the radical loons, which is why a kind of tolerance has evolved between the radical left and the incoherent silent majority.

These moderate people say "why not", "it can't hurt" etc. when some whites/men are forced to spit on their race and sex, or when 50% of the hires are women or people of color chosen purely through affirmative action. Sorry, Ladies and Gentlemen, but like Nazism, communism, and all totalitarian political movements based on lies, this system of lies and intimidation is very harmful and existentially threatening for whole sectors of society and for scientific disciplines, too.

We're still waiting for the first female physics Nobel prize winner who would say that she has found some institutionalized diversity efforts helpful – Marie Curie and Maria Goeppert Mayer weren't helped at all, and Donna Strickland considers herself a scientist, not a woman in science, and is annoyed when her name is abused by the feminist ideologues.

However, we already have a great example of prominent negative contributions to particle physics. Sabine Hossenfelder released her book, Lost In Math (whose PDF was posted by her or someone else on the Internet and you may find it on Google), and is writing numerous essays to argue that no new collider should ever be built again and particle physics should be suspended and 90% of the physicists should be fired.

For example, two days ago, Nude Socialist published her musings titled Why CERN’s plans for a €20 billion supersized collider are a bad idea, whose title tells you everything you need to know (at least I believe you have no reason to contribute funds to the socialist porn). Ms Hossenfelder complains about the "eye-watering" €21 billion price of the most ambitious version of the FCC collider. Because she feels lost in math, you will have some suspicion that she chose the eye-watering adjective because she confused billions and trillions. But even if she did, it doesn't matter and she wouldn't change the conclusion, because mathematics never plays a decisive role in her arguments.

Therefore, investment-wise, it would make more sense to put particle physics on a pause and reconsider it in, say, 20 years to see whether the situation has changed, either because new technologies have become available or because more concrete predictions for new physics have been made.

No šit. Look, we are currently paying for a lot of particle physicists. If we got rid of 90% of those we'd still have more than enough to pass on knowledge to the next generation.

I am perfectly aware that there are theorists doing other things and that experimentalists have their own interests and so on. But who cares? You all sit in the same boat, and you know it. You have profited from those theorists' wild predictions that capture the public attention, you have not uttered a word of disagreement, and now you will go down with them.

And she wrote many theses along the same lines. From the beginning, when this blog was started in late 2004, I was explaining that the jihad against string theory wasn't specifically targeting string theory. It was just a manifestation of some people's great hostility towards quantitative science, creativity, curiosity, rigor, mental patience, and intellectual excellence – and string theory was just the first target because it's probably the best example of all these qualities that mankind has.

It seems to me that at least in the case of Sabine Hossenfelder, people see that I was right all along. This movement is just a generic anti-science movement and string theory or supersymmetry were the first targets because they are the scienciest sciences. But the rest of particle physics isn't really substantially different from the most prominent theories in theoretical physics, and neither is dark energy, dark matter, inflationary cosmology, and other things, so they should "go down" with the theories in theoretical physics, Hossenfelder proposes. It makes sense. If you succeeded in abolishing or banning string theory, of course you could abolish or ban things that "captured less public attention", too. It is just like in the story "First they came for the Jews...". And it's not just an analogy, of course. There's quite some overlap because some one-half of string theorists are Jewish while the ideology of the string theory critics was mostly copied from the pamphlets that used to define the Aryan Physics.

Well, as far as I know, Peter Woit and Lee Smolin – the prominent crackpots who hated string theory and played Sabine Hossenfelder's role around 2006 – have never gone far enough to demand the suspension of particle physics for 20 years, dismissal for 90% of particle physicists, and other things. So even people from this general "culture" were surprised by Hossenfelder's pronouncements. For example, Lorenzo wrote:

Sabine, I [have been licking your aß for years] but I fear that recently your campaign against particle physics is starting to go a bit too far. [...]

Well, Lorenzo's sycophancy, as well as his paragraphs full of arguments, didn't help: he was the guy who was later told that he had to "go down" as well, in another quote above. Many others have been ordered to be doomed, too. OK, why was it Ms Sabine Hossenfelder, and not e.g. her predecessors Mr Peter Woit and Mr Lee Smolin, who finally figured out the "big, simple, ingenious idea" – the plan to demand death for particle physics as a whole?

The correct answer has two parts that contribute roughly equally, I think. The first part is analogous to Leonard's answer to Penny's implicit question about his night activities:

Penny: Oh, Leonard?
Leonard: Hey.
Penny: I found these [Dr Elizabeth Plimpton's underwear] in the dryer. I’m assuming they belong to Sheldon.
Leonard: Thanks. It’s really hard to find these in his size. So, listen. I’ve been meaning to talk to you about the other morning.
Penny: You mean you and Dr. Slutbunny?
Leonard: Yeah, I wanted to explain.
Penny: Well, you don’t owe me an explanation.
Leonard: I don’t?
Penny: No, you don’t.
Leonard: So you’re not judging me?
Penny: Oh, I’m judging you nine ways to Sunday, but you don’t owe me an explanation.
Leonard: Nevertheless, I’d like to get one on the record so you can understand why I did what I did.
Penny: I’m listening.
Leonard: She let me.

OK, why did Leonard have sex with Dr Plimpton? Because she let him. Why does Dr Hossenfelder go "too far" and demand euthanasia for particle physics? Because they let her – or we let her. Everyone let her. So why not? Mr Woit and Mr Smolin didn't go "this far" because, first, they are really less courageous and less masculine than Ms Hossenfelder; and second, because – as members of the politically incorrect sex – they would genuinely face a more intense backlash than a woman.

The second part of the answer why the "big plan" was first articulated by Ms Hossenfelder, a woman, and not by a man, like Mr Smolin or Mr Woit, is that it is more likely for a woman to grow hostile towards all of particle physics or any activity within physics that really depends on mathematics.

Mathematics is a man's game. The previous sentence is a slogan that oversimplifies things and a proper interpretation is desirable. The proper interpretation involves statistical distributions. Women are much less likely to feel really comfortable with mathematics and to become really successful in it (they are predicted – and seen – to earn about 1% of the Fields Medals, for example), especially advanced mathematics and mathematics that plays a crucial role, primarily because of the following two reasons:

women's intellectual focus is more social, on nurturing, and less on "things" and mathematics

women's IQ distribution is some 10% narrower, which means that the fraction of women with extremely high mathematical abilities – and this extreme tail is what is needed – decreases much more quickly than it does for men (the narrower distribution also means that women are less likely than men to be really extremely stupid)

Smolin, Woit, and Hossenfelder are just three individuals so it would be a fallacy to generalize any observations about the three of them to theories about whole sexes. On the other hand, lots of the differences (except for her being more masculine than Woit and Smolin when it comes to courage!) are pretty much totally consistent with the rational expectations based on lots of experience – extreme leftists would say "prejudices" – about the two sexes. Ms Hossenfelder, a woman, just doesn't believe that the arguments and derivations rooted in complex enough mathematics should play a decisive role. She has very unreasonably claimed that even totally technical if not mathematical questions such as the uniqueness of string theory are sociological issues. It's because women want to make things "social". Well, Lee Smolin has also preposterously argued that "science is democracy" but he just didn't go this far in the removal of mathematics from physics.

Just to be sure, I am not saying that Ms Hossenfelder is a typical feminist. She is not. She hasn't been active in any of the far-left campaigns. But her relationship towards mathematical methods places her among the typical women. She has been trained as a quantitative thinker, which made her knowledge surpass that of the average adult woman, of course, but training cannot really change certain hardwired characteristics. On the other hand, a feminist is someone who believes that it is reasonable for women – and typical women like her – to have close to 50% of the influence over physics (or other previously "masculine" activities). So what I am saying is the following: it is not her who is the real feminist in this story – it is the buyers and readers of her book and her apologists. Non-feminists, whether they're men or women, probably find it a matter of common sense that her book might be poetry or journalism, but her opinions about physics itself just cannot be considered on par with those of the top men.

In the text above, we could see one mechanism by which the diversity efforts hurt particle physics. They made it harder to criticize a woman, in this case Ms Sabine Hossenfelder, for her extremely unfriendly pronouncements about science and for the bogus arguments on which she builds. Because simply pointing out that Hossenfelder is just full of šit would amount to "mansplaining", and the diversity efforts classify these things as politically incorrect, political correctness has helped to turn Ms Sabine Hossenfelder into an antibiotics-resistant germ.

The second mechanism by which the diversity efforts have caused this Hossenfelder mess is hiding in the answer to the question: Why did Ms Hossenfelder and not another woman start this movement to terminate particle physics?

Well, it's because she is really angry. And she got really angry about it because she has been more or less forced – for more than 15 years – to do things she naturally mistrusts, according to her own opinions. Indeed, my anger is the fair counterpart of that, as explained e.g. in the LHC presentation by Sheldon Cooper. ;-) She had to write papers about the search for black holes at the LHC, theories with a minimal length, and lots of other things – usually with some male collaborators in Frankfurt, Santa Barbara, and elsewhere. But based on her actual experience and abilities, she just doesn't "believe" that anything in particle physics has any sense or value.

She has made this admission repeatedly. For example, a conversation with Nima Arkani-Hamed in her book ends with:

On the plane back to Frankfurt, bereft of Nima’s enthusiasm, I understand why he has become so influential. In contrast to me, he believes in what he does.

Right, Nima's energy trumps that of the FCC collider and he surely looks like he believes what he does. But why would Ms Hossenfelder not trust what she is doing? And if she didn't trust what she was doing, why wasn't she doing something else? Why hasn't she left the damn theoretical or particle physics? Isn't it really a matter of scientific integrity that a scientist should only do things that she believes in?

And that's how we arrive at the second mechanism by which the diversity ideology has "caused" the calls to terminate particle physics. The diversity politics is pushing (some of the) people into places where they don't belong. We usually talk about the problems that this causes to the places. But this "misallocation of the people" causes problems for the people, too. People are being thrown into prisons. Sabine Hossenfelder was largely thrown into particle physics and related fields, and for years, it was assumed that she had to be grateful for it, that the system needed her to improve the diversity ratios, and that no one needed to ask her about anything.

But she has suffered. She has suffered for more than 15 years. To get an idea, imagine that you need to deal with lipsticks on your face every fudging day for 15 years, otherwise you're in trouble. She really hates all of you and she wants to get you. So Dr Stöcker, Giddings, and others, be extraordinarily careful about vampires at night because the first vampire could be your last.

The politically correct movement first forced Sabine Hossenfelder to superficially work in theoretical high energy physics (a field whose purpose is to use mathematical tools and arguments to suggest and analyze possible answers to open questions about the Universe), although she never wanted to trust mathematics this much, never wanted to derive the consequences of many theories when at most "one theory" is relevant in "real life", and certainly didn't want to continue with this activity – which depends on the trust in mathematics – after the first failed predictions. (Genuine science is all about the falsification of models, theories, and paradigms, and failed predictions are everyday occurrences; scientists must continue, otherwise science would have stopped within minutes the first time it was tried in a cave. In the absence of paradigm-shifting experiments, it's obvious that theorists are ahead and are comparing a greater number of possible paths forward than 50 years ago – not everyone likes it, but it's common sense why the theoretical work has to be more extensive in this sense.)

And the PC ideology has kept her in this prison for more than 15 years.

And then, when she has already emerged as an overt hater of particle physics, the same diversity ideology is turning her into an antibiotics-resistant germ, because it's harder to point out that she isn't really good enough to be taken seriously when it comes to such fundamental questions. And some people don't seem to care: in their efforts to make women stronger, they want to help her sell her book – the risk that they are thereby helping to kill particle physics seems to be a non-problem for them.

These are the two mechanisms by which the politically correct ideology threatens the very survival of scientific disciplines such as particle physics. So all you cowards in the HEP silent majority, you're just wrong. PC isn't an innocent cherry on a pie. PC is a life-threatening tumor and the cure had better start before it's too late.

January 19, 2019

““This is absolutely the stupidest thing ever,” said Antar Davis, 23, a former zookeeper who showed up in the elephant house on Friday to take one last look at Maharani, a 9,100-pound Asian elephant, before the zoo closed.” The New York Times, Dec 29, 2018

“The Trump administration has stopped cooperating with UN investigators over potential human rights violations occurring inside America [and] ceased to respond to official complaints from UN special rapporteurs, the network of independent experts who act as global watchdogs on fundamental issues such as poverty, migration, freedom of expression and justice.” The Guardian, Jan 4, 2019

““I know more about drones than anybody,” he said (…) Mr. Trump took the low number [of a 16% approval in Europe] as a measure of how well he is doing in the United States. “If I were popular in Europe, I wouldn’t be doing my job.”” The New York Times, Jan 3, 2019

““Any deaths of children or others at the border are strictly the fault of the Democrats and their pathetic immigration policies that allow people to make the long trek thinking they can enter our country illegally.”” The New York Times, Dec 30, 2018

This Monday, 21st January 2019, is the centenary of a momentous day in Irish history. On 21st January 1919 the first Dáil Éireann met and issued a Declaration of Irish Independence, and so the War of Irish Independence began.

This post from Maynooth Library describes fascinating archived material relating to Domhnall Ua Buachalla, who was elected to the First Dáil for Kildare North (which includes Maynooth).

‘May God send in every generation men who live only for the Ideal of Ireland A Nation’
James Mallon B. Co. III Batt. I.R.A. Hairdresser
“To the boy of Frongoch” with E. D’Valera
Easter Week 22/12/16 Frongoch.

MU/PP26/2/1/7 Autograph by James Mallon

Members of the first Dáil 1919

On the 21st of January 1919, the first meeting of Dáil Éireann took place in the Mansion House, Dublin. Elected in the 1918 General Election, the members of parliament refused to take up their seats in Westminster, and instead established the Dáil as a first step in achieving the Irish Republic.

January 18, 2019

The partial government shutdown that shuttered NASA continues with no end in sight. The U.S. space program sits idle, the vast majority of its workforce sent home. Space science and exploration projects are disrupted. Paychecks are absent. And an unsettling realization has dawned on hundreds of thousands of public employees and contractors affected by the shutdown: this time is different.

Before going home for the weekend I thought I’d share this work (300cm × 120cm, marker on whiteboard) by a relatively unknown Anglo-Irish artist currently based in the Maynooth area who wishes to remain anonymous.

Despite the somewhat stochastic form of the composition and the unusual choice of medium I think this work speaks for itself, but I’d just like to comment* that, with regard to the issue of content, the disjunctive perturbation of the spatial relationships resonates within the distinctive formal juxtapositions. Moreover, the aura of the figurative-narrative line-space matrix threatens to penetrate a participation in the critical dialogue of the 90s. Note also that the iconicity of the purity of line brings within the realm of discourse the remarkable and unconventional handling of light. As a final remark I’ll point out that the presence of a bottle of whiteboard cleaner to the bottom right of the work symbolizes the ephemeral nature of art and in so doing causes the viewer to reflect on the transience of human existence.

It was back to college this week, a welcome change after some intense research over the hols. I like the start of the second semester, there’s always a great atmosphere around the college with the students back and the restaurants, shops and canteens back open. The students seem in good form too, no doubt enjoying a fresh start with a new set of modules (also, they haven’t received the results of last term’s exams yet!).

This semester, I will teach my usual introductory module on the atomic hypothesis and early particle physics to second-years. Yet again, I’m fascinated by the way the concept of the atom emerged from different roots: from philosophical considerations in ancient Greece to considerations of chemistry in the 18th century, from the study of chemical reactions in the 19th century to considerations of statistical mechanics around the turn of the century. Not to mention a brilliant young patent clerk who became obsessed with the idea of showing that atoms really exist, culminating in his famous paper on Brownian motion. But did you know that Einstein suggested at least three different ways of measuring Avogadro’s constant? And each method contributed significantly to establishing the reality of atoms.

In 1908, the French physicist Jean Perrin demonstrated that the motion of particles suspended in a liquid behaved as predicted by Einstein’s formula, derived from considerations of statistical mechanics, giving strong support for the atomic hypothesis.
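For the curious, Einstein's formula can even be turned into a toy Perrin-style estimate of Avogadro's constant. The sketch below simulates one-dimensional Brownian motion and inverts the relation ⟨x²⟩ = 2Dt with D = RT/(6πηaN_A); the temperature, viscosity, and particle radius are illustrative round numbers, not Perrin's actual measured data:

```python
import math
import random

# Toy Perrin-style estimate of Avogadro's constant from simulated
# Brownian motion, via Einstein's 1905 relation:
#   <x^2> = 2 D t,  where  D = R T / (6 pi eta a N_A).
# The physical inputs are illustrative round numbers, not Perrin's data.
R = 8.314        # gas constant, J/(mol K)
T = 293.0        # temperature, K
eta = 1.0e-3     # viscosity of water, Pa s
a = 0.5e-6       # colloid particle radius, m
N_A_true = 6.022e23

# Diffusion coefficient implied by the "true" Avogadro constant.
D = R * T / (6 * math.pi * eta * a * N_A_true)

# Simulate many 1-D random walks of total duration t, measure <x^2>.
random.seed(0)
t, n_steps, n_particles = 10.0, 100, 2000
sigma = math.sqrt(2 * D * (t / n_steps))   # Gaussian step size per dt
msd = 0.0
for _ in range(n_particles):
    x = 0.0
    for _ in range(n_steps):
        x += random.gauss(0.0, sigma)
    msd += x * x
msd /= n_particles

# Invert Einstein's formula:  N_A = R T t / (3 pi eta a <x^2>).
N_A_est = R * T * t / (3 * math.pi * eta * a * msd)
print(f"estimated N_A = {N_A_est:.2e} per mol")
```

With a few thousand simulated particles the estimate lands within a few percent of 6.02 × 10²³, which is essentially what Perrin did with real colloids under the microscope.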

One change this semester is that I will also be involved in delivering a new module, Introduction to Modern Physics, to first-years. The first quantum revolution, the second quantum revolution, some relativity, some cosmology and all that. Yet more prep of course, but ideal for anyone with an interest in the history of 20th century science. How many academics get to teach interesting courses like this? At conferences, I often tell colleagues that my historical research comes from my teaching, but few believe me!

Backreaction, currently the most influential forum of haters of physics in the world, has reacted to the newly completed design plans for the next collider at CERN, the FCC. You may find the article, with the not very friendly but very populist title Particle physicists want money for bigger collider, through a search engine.

In that article, we learn that even the lowest estimate, €9 billion, is too much money and not worth spending. Such appraisals obviously depend on one's priorities, but a person who finds €9 billion way too much is clearly an anti-science savage. It's just a small fraction of the capitalization of Tesla, a car company making just some 200,000 cars a year (0.3% of the global annual production of 70 million cars a year) and still waiting for its first annual profit. Or 1/20 of Apple's cash reserves.

What would you think about the global population that isn't willing to pay this much money for the most fundamental scientific experiment once a decade or two?

The first thing I find shockingly crazy in Hossenfelder's rant is her talk about "them, the evil particle physicists". Isn't she a particle physicist herself, with the most cited articles of the type "the minimal length in QFT or quantum gravity" or "phenomenology of quantum gravity"? Well, that's a good question – and a subtle one.

If you count particle physics crackpots as practitioners of particle physics, she is a particle physicist; if you don't, then she is not. Talking about the minimal length in the theoretical frameworks that we use to describe the fundamental laws of Nature – and about the phenomenology of the theories associated with the shortest distances we consider in science – is theoretical particle physics, and the only reason you could find to say that it is not particle physics is that all her papers are rubbish.

But things get even more extreme in the discussion under her main rant. Commenter Frederic Dreyer sort of disagrees with the defunding and she responds:

Dreyer: "...most of the particle physics community would disagree with this statement..."

Hossenfelder: No shit. Look, we are currently paying for a lot of particle physicists. If we got rid of 90% of those we'd still have more than enough to pass on knowledge to the next generation. [...]

The chutzpah of this crackpot is just incredible. Not only does she not admit that she is still being paid for pretending to be a particle physicist of a sort (which should hopefully end in a few months). She claims that "we are paying". How did you become an important member or even a spokeswoman of that "we", Ms Hossenfelder? How much have you paid to those evil particle physicists, Ms Hossenfelder?

OK, she proposes to fire 90% of particle physicists, practitioners in the most fundamental discipline in pure science. Nice. Let's imagine some hardcore politicians get the power and start the process.

First, there will be questions such as: Who is fired? And who stays? Who is the lucky 10%? One is supposed to keep the best folks. But how does the system determine that? On top of that, many people have some faculty positions so they're also teaching. Isn't it obvious that the universities will still need these instructors? Some of them teach not just particle physics but also more general subjects. Won't the universities be forced to simply reclassify these people as pure instructors?

Now, how could this be helpful? Is an instructor who doesn't do any scientific research a better one? I don't think so. Teaching is clearly a derivative occupation while the research is the real story that the students are really being trained for. Research is what actually gives the authority to the people. Similarly, things simply don't get better if you force a competent physicist to only teach Classical Mechanics instead of Classical Mechanics and Anomalies in Quantum Field Theory. If the system prevented the capable people from teaching stuff like QFT, the correct explanation would clearly be that QFT has been labeled a heresy. QFT takes time and brains to be learned and younger folks want to learn it – so what could be the justification of such a ban other than the medieval laws against a heresy?

Great. I think it's obvious that she would answer that the instructors of courses related to particle physics would be mostly eradicated and she would find some better replacements wherever needed – whose advantage is that they can only teach Classical Mechanics. Let me warn you: Hossenfelder's procedure can't be called the "decimation of particle physics". Decimation means that 10% of the soldiers are shot dead and 90% survive. She wants to do it the other way around! ;-)

OK, how will physics look after the anti-decimation wet dream of hers?

It's clear that most research groups that have some particle physics will be shut down entirely. Why? You can't really reduce the numbers by 90% uniformly because the individual places would have too few people who are doing these things. The rare leftovers would be getting stuck all the time, would be incapable of attracting students, there couldn't be meaningful seminars there because the number of people would be too low, and so on.

The anti-decimation would therefore be closer to the shutdown of 90% of research groups while some 10% would survive almost in the present numbers. Countries like Czechia (1/700 of the world population but 1/300 of some GDP-like importance) would have no condensation core for particle physics – people in such medium-size nations just couldn't think about meeting several people speaking the same language who understand at least the graduate textbook stuff.

How many people would stay? Alessandro Strumia has used a database with 40,000 authors of particle physics papers. It seems reasonable to me to hypothesize that the actual number of actively paid professionals is lower at every point, perhaps 20,000. Who exactly is counted is a fuzzy problem for many reasons. Hossenfelder's plan is to reduce those to 2,000. Two fudging thousand people on a planet with over 7 billion people. One would need a group of over 3.5 million people to expect one particle physicist in it. Tomorrow alone, Tesla will fire more people than that, namely 3,000 employees.
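Just to make the back-of-envelope numbers above explicit – all figures are the post's own assumptions, not independent data – here is the arithmetic spelled out:

```python
# Back-of-envelope check of the headcount figures in the text.
# All inputs are the post's own assumptions, not independent data.
authors_in_database = 40_000            # Strumia's author count
paid_professionals = 20_000             # the post's lower estimate
after_cut = paid_professionals // 10    # the proposed 90% reduction
world_population = 7.0e9

people_per_physicist = world_population / after_cut

print(after_cut)              # 2000 remaining professionals
print(people_per_physicist)   # 3.5 million people per physicist
```

The 3.5 million figure in the text simply divides the world population by the 2,000 survivors of the proposed cut.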

Someone might think that 2,000 is still a lot of people for particle physics. But only someone who doesn't really understand what kind of stuff and subdisciplines exist in particle physics – someone whose resolution is absolutely hopeless and who just doesn't see the structures inside – could think that 2,000 would be enough for the field to continue in a comparably meaningful sense. Why would there be a problem? You know, a complete layman may imagine that a good physicist was Einstein, which was 1 man, and it's about the right number.

A smarter layman could recall that it was once said that general relativity was only understood by 12 physicists in the world, so maybe 12 is enough as the number of particle physicists, too, although the purpose of the number has always been to impress the listeners by its low value.

Let's not be overly ludicrous and let's ask: What would the papers written by that anti-decimated HEP community look like? We will assume that the same anti-decimation would apply to string theory and quantum gravity as well. Whether they would be counted as particle physics just couldn't be important – they're too close in spirit. I guess that the anti-decimators would prefer the reduction to be even harsher than 90% in those fields. It's helpful to pick an example of an influential paper from the recent decade so that it defines a whole subdiscipline.

I somewhat randomly pick the "entanglement is geometry/wormhole" minirevolution. The Maldacena-Susskind paper on the ER-EPR correspondence is being cited by approximately 10 other papers a month – the rate is very close to constant between 2013 and 2018. After Hossenfelder's anti-decimation, it seems obvious to me that this number would drop roughly by 90%, too. You could hope that it's a research direction of a higher quality so it would be more likely to survive. But it just couldn't work in this way.

Well, let's see: I find 90% of papers "not very important", but the selection just couldn't possibly be such that this subset would greatly overlap with the research that disappeared. Research projects of all kinds would suffer comparably. Some papers are more intriguing to all readers – or all smart readers – partly because they're very good papers, and partly because the reader's (or my) interests don't reach all kinds of research. But the quality cannot be uniform, so aside from very good papers, there always exist papers that are not very good. You can't change the fact that the distribution always has a width.

For these reasons, we are talking about the world where one paper is written each month that has something to do with ER=EPR – despite the fact that it's one of the most important new topics. There are roughly 12 such papers in a year. They have 12 or so authors – papers usually have more than 1 author but some authors will be repeated. It's the same 12 apostles that "understood GR" in the witticism whose purpose was to claim that the understanding of GR is extremely rare on Planet Earth. You would basically get to this point with topics like ER=EPR.

With such anti-decimated numbers, lots of the things would be below the critical mass. The feedback to the people's papers would be too scarce and slow. Conferences on any topics finer than e.g. string theory would become impossible to organize. Even the string conference would be visited by some 50 people only. Those could be close to a random 10% subset of the current participants. How are you supposed to preserve any cohesion in such a group if the number of mutually distinguishable research subdirections is probably higher than 50?

You cannot. The anti-decimation would mean killing most of the subdisciplines as well – when the population of a species gets under some critical mass, it's likely to go extinct soon. Mankind would stop doing these things. The smartest kids who would be born in 2020 and who would have access to the libraries in 2035 (Hossenfelder has generously suggested that she doesn't plan to burn the libraries, so far) would be shocked by what kinds of incredible things people could do as recently as 2018 or so, before things started to collapse in 2019 or so. ;-) The old material would become as impenetrable to the future people as some ancient Greek if not Babylonian texts are to the modern world – because once the world loses the controllable network of teachers, students, tests, and peer review, all the knowledge will become at most amateurish. We may hope we're not missing anything important that was included in the ancient texts – but they clearly would be.

In another sentence, Hossenfelder proposes to put particle physics on the back burner for 20 years. Particle physics is a living organism, like yogurt. Have you ever tried to put yogurt on the back burner for 20 years?

Even if we neglect the topics that would disappear completely, the rate of progress in particle physics would slow down roughly by 90%, too. On one hand, the decrease could be less brutal because one would optimistically fire the "worse" people on average. But the survivors would have a worse intellectual infrastructure of colleagues, so even their personal rate of progress would probably slow down. If we have several findings at the level of the Higgs boson and ER=EPR per decade, we would have several advances like that per century – or per lifetime, if you wish. Why would someone who has pretended to be a theoretical physicist for 15 years want such a change to society? Why can't the world pay 0.01% of the annual global GDP – once in a decade or two (so the expenses are really 0.001% of the global GDP over this longer period) – to build a new cutting-edge collider? And a similar amount for the non-collider expenses powering the field?
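A quick sanity check of the cost fractions quoted above – using a hypothetical round figure of $85 trillion for the annual global GDP and the €9 billion lowest FCC estimate mentioned earlier (treating euros and dollars as comparable for an order-of-magnitude estimate):

```python
# Order-of-magnitude check of the collider-cost fractions in the text.
# The GDP figure is an assumed round number, not an official statistic.
global_gdp = 85e12       # annual global GDP, roughly, in USD
collider_cost = 9e9      # lowest FCC estimate cited earlier

share_in_one_year = collider_cost / global_gdp            # ~0.01%
share_per_year_over_decade = share_in_one_year / 10       # ~0.001%

print(f"{share_in_one_year:.4%} of one year's GDP")
print(f"{share_per_year_over_decade:.4%} per year, amortized over a decade")
```

So the "0.01% once a decade, i.e. 0.001% per year" claim in the text is consistent with these assumed round numbers.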

Do any independent people with a brain really think that saving 0.001% of the global GDP justifies the global eradication of the most fundamental scientific discipline? What is driving hateful lunatics like Ms Hossenfelder? Does she want to stop at particle physics, or is this just the beginning of a plan to eradicate all human activities in which she realizes her own inadequacy? After particle physics is banned in this way, why would mankind keep condensed matter physics? Astrophysics? Nuclear and molecular physics? Aren't those just inferior versions of something that has been found useless as well? And then quantum mechanics – isn't it a theory without applications (because those have been banned)? Isn't algebra, calculus, or all of mathematics really a useless anachronism? Physics in general? Schools? Writing and reading?

Simeon and others, don't you realize that by your endorsement of the feminist craze and fascist petitions such as the one by the dickhead D.H., you are helping to make people like Hossenfelder incredibly politically strong, because lots of people similar to you (and maybe including you) are afraid of criticizing such crackpots of a privileged sex? What's wrong with so many of you? Hossenfelder's plan to reduce particle physics by 90% or suspend it for 20 years is what your celebrated "diversity" means in the real world. If you push people to do something that they naturally hate, they will dream about destroying it.

It has been a little while since I wrote here – not since last month, when it was also last year – so let's break that stretch. It was not a stretch of complete quiet, as those of you who follow me on social media know (Twitter, Instagram, Facebook... see the sidebar for links), but I do know some of you don't follow me there directly, so I apologise for the neglect.

The fact is that I've been rather swamped with several things, including various duties that were time-consuming. Many of them I can't talk about, since they are not for public consumption (this ranges from being a science advisor on various things - some of which will be coming at you later in the year, to research projects that I'd rather not talk about yet, to sitting on various committees doing the service work that most academics do that keeps the whole enterprise afloat). The most time-consuming of the ones I can talk about is probably being on the search committee for an astrophysics job for which we have an opening here at USC. This is exciting since it means that we'll have a new colleague soon, doing exciting things in one of a variety of exciting areas in astrophysics. Which area is still to be determined, since we've yet to finish the search. But it did involve reading through a very large number of applications (CVs, cover letters, statements of research plans, teaching philosophies, letters of recommendation, etc), and meeting several times with colleagues to narrow things down to a (remarkable) short list... then hosting visitors/interviewees, arranging meetings, and so forth. It is rather draining, while at the same time being very exciting since it marks a new beginning! It has been a while since we hired in this area in the department, and there's optimism that this marks the beginning of a re-invigoration for certain research areas here.

Physics research projects have been on my mind a lot, of course. I remain very excited about the results that I reported on in a post back in June, and I've been working on new ways of building on them. (Actually, I did already do a followup paper that I did not write about here. For those who are interested, it is a whole new way of defining a new generalisation of something called the Rényi entropy, that may be of interest to people in many fields, from quantum information to string theory. I ought to do a post, since it is a rather nice construction that could be useful in ways I've not thought of!) I've been doing some new explorations of how to exploit the central results in useful ways: Finding a direct link between the Second Law of Thermodynamics and properties of RG flow in quantum field theory ought to have several consequences beyond the key one I spelled out in the paper with Rosso (that Zamolodchikov's C-theorem follows). In particular, I want to sharpen it even further in terms of something following from heat engine constraints, as I've been aiming to do for a while. (See the post for links to earlier posts about the "holographic heat engines" and their role.)

You might be wondering how the garden is doing, since that's something I post about here from time to time. Well, right now there is an on-going deluge of rain (third day in a row) that is a pleasure to see. The photo at the top of the page is one I took a few days ago when the sky was threatening the downpours we're seeing now. The rain and the low temperatures for a while will certainly help to renew and refresh things out there for the (early) Spring planting I'll do soon. There'll be fewer bugs and bug eggs that will [...]

January 17, 2019

Today, there was news that a huge database containing 773 million email address / password pairs had become public. On Have I Been Pwned you can check whether any of your email addresses is in this database (or any similar one). I bet it is (mine are).

These lists are very probably the source of the spam emails that have been circulating for a number of months, in which the spammer claims they broke into your account and tries to prove it by telling you your password. Hopefully, it is only a years-old LinkedIn password that you changed aeons ago.

To be sure, you actually want to search not for your email address but for your password. But of course, you don't want to tell anybody your password. To this end, I have written a small Perl script that checks for your password without telling anybody, by doing the calculation locally on your computer. You can find it on GitHub.
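The trick such a script relies on is the "k-anonymity" range lookup of the Pwned Passwords service: the password is hashed locally with SHA-1 and only the first five hex characters of the digest are sent to the server, so the password itself never leaves your machine. The post's script is in Perl; here is a minimal Python sketch of the same idea (the function names are mine, and the actual network call is deliberately left out):

```python
import hashlib

def split_sha1(password):
    """Hash the password locally and split the SHA-1 hex digest:
    only the first 5 characters would ever be sent to the server;
    the 35-character remainder never leaves your machine."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def count_breaches(password, range_response):
    """Scan the server's answer for a 5-character prefix (plain-text
    lines of the form 'SUFFIX:COUNT') and return how many times the
    password appears in the breach corpus, or 0 if it doesn't."""
    _, suffix = split_sha1(password)
    for line in range_response.splitlines():
        candidate, _, count = line.strip().partition(":")
        if candidate == suffix:
            return int(count)
    return 0
```

In real use one would fetch `https://api.pwnedpasswords.com/range/<prefix>` and feed the response body to `count_breaches`; any nonzero count means the password is burned and should be changed everywhere.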

Slava Linkin, one of the leading planetary scientists in the Soviet Union and later Russia, passed away on 16 January 2019. Viacheslav Mikhailovich Linkin was an enormously important participant in Planetary Society history.

Chad Orzel has posted a fun piece that really tries to clarify all the brouhaha in many circles about a "crisis" that many presume to be widespread. The crisis in question is the lack of "beyond the Standard Model" discoveries in elementary particle physics, and the issue that many elementary particle theorists seem to think that a theory's solid foundations and elegance are sufficient for it to be taken seriously.

I find this very frustrating, because physics as a whole is not in crisis. The "crisis" being described is real, but it affects only the subset of physics that deals with fundamental particles and fields, particularly on the theory side. (Experimental physicists in those areas aren't making dramatic discoveries, but they are generating data and pushing their experiments forward, so they're a little happier than their theoretical colleagues...)

The problems of theoretical high energy physics, though, do not greatly afflict physicists working in much of the rest of the discipline. While this might be a time of crisis for particle theorists, it's arguably never been a better time to be a physicist in most of the rest of the field. There are exciting discoveries being made, and new technologies pushing the frontiers of physics forward in a wide range of subfields.

This is a common frustration, because elementary particle physics is not even the biggest subfield of physics (condensed matter physics is), and yet it makes a lot of noise, and the media and the public seem to pay more attention to that noise. So whenever something rocks this field, people often tend to think that it permeates the entire field of physics. This is utterly false!

Orzel has listed several outstanding and amazing discoveries and advancements in condensed matter. There are more! The study of topological insulators continues to be extremely hot and appears to be interesting not only for applications, but also as a "playground" for exotic quantum field theory scenarios.

I've said it many times, and I'll say it again. Physics isn't just the Higgs or the LHC. It is also your iPhone, your MRI, your WiFi, your CT scan, etc....etc.

Those people who aren't quite satisfied with a 75-second-long popular video about the Future Circular Collider (FCC) at CERN – a video with some usual nice pictures saying that the experiment wants to study particle physics and the Universe – have the opportunity to look at a somewhat more detailed study.

Today, the FCC Collaboration has submitted their papers to the European Physical Journal:

Here you have a 2-minute FCC video starting with the documentary proving that the Earth is flat.

If you add up the pages of the four papers submitted to EPJ (one to EPJ C and three to EPJ ST; lead authors are Benedikt, Zimmermann, and Mangano), you get 1,244 pages of technical documentation (or, if you add the 81 pages in the four updates, 1,325 pages). That would be a lot of pages for a small group of authors. However, each list of authors' names occupies some 5 pages and an additional 10 pages are dedicated to the list of institutions.

The new collider should be a greater version of the LHC. Instead of a 27-kilometer tunnel, there should be a 100-kilometer tunnel. But just like the LHC, it should first host a lepton (electron-positron) collider whose adjustable center-of-mass energy is just enough to produce either W-boson pairs; or top-quark pairs; or \(HZ\) pairs of the Higgs and the Z-boson (thanks Tristan again, it's not the first time I made a similar mistake).

In a later stage, the same 100-kilometer-long tunnel would host the proton-proton collider analogous to the LHC. But the total center-of-mass energy wouldn't be \(13\TeV\) as it recently was at the LHC. Instead, it would be \(100\TeV\). The Standard Model can still work well over there. But it can break below that energy, too. There exist various reasons why lots of particles – such as superpartners in some M-theory compactifications and/or more general models justified by certain intriguing cosmological criteria – should be below that energy.

Maybe the 100-kilometer-long circular tunnel could even be created so that Elon Musk could drive his Tesla there for a while. ;-)

Many of us feel that the LHC has already probed enough at the energy of a few \({\rm TeV}\) and it's rather likely that nothing new beyond the Higgs boson is gonna be found there. But with the higher \(100\TeV\) energy, the game would start with the full excitement, of course.

I am dedicating a special blog post to the four papers sent to EPJ in order to make sure that everyone who is seriously interested – e.g. everyone who would like to promote his or her opinion – can see what the two-stage project is actually supposed to be and what physical effects, known or hypothetical ones, may be tested with such a device.

All these design reports are available not only to the particle physicists or all physicists or all scientists but to the general public – in Europe and beyond.

Your humble correspondent is not going to read those 1,244 pages and I think that the number of people who have read or will read all of them is extremely tiny. But I have looked at many pages and I think that everyone who wants his or her opinion to be treated seriously should refer to the plans in these four papers rather than to deliberately oversimplified formulations in a rather irrelevant 75-second-long popular video addressed to complete laymen.

The people who can only discuss the popular video must be considered complete laymen and it's my belief that the influence of such people over the multi-billion decisions about the future European particle physics projects should be minimal. All people's opinions – and taxpayers' opinions – matter but science is a meritocracy, not a democracy, and it's obvious that some people's opinions must matter much less than other, better-informed people's opinions.

Two months ago, Lisa Randall was in China and gave a wonderful interview over there to a female journalist who was much more prepared than the typical Western journalists. She praised China and the chance that China will fund some big projects. It's great but due to risks to the freedom and democracy of the physicists working for the Chinese projects, I would still prefer another big collider in the good old Europe – which will hopefully remain somewhat more free and democratic than China at least for a few more decades.

This conceptual design report came out today. It looks like an impressive amount of work and although I am familiar with some of its contents, it will take time to digest, and I will undoubtedly be writing more about it …

At the dawn of the new year, the Staff Association sends you and your loved ones our best wishes of good health, happiness and success! May this New Year be filled with personal and professional satisfaction.

The Committee week at the end of 2018

The CERN Finance Committee and Council met from 12 to 14 December 2018.

The main decisions that will directly or indirectly affect the financial and social conditions of the staff are:

In 2019 the indexation of basic salaries and allowances is set to 1.05%, subsistence allowances at 0.68%, and the indexation of the material budget is set to 2.64%. The Staff Association was pleased to note that the indexation procedure has been strictly applied.

The Management's proposal to start the next five-yearly review in 2020 and to conclude it with a decision of Council in 2021, i.e. with a one-year shift, was unanimously approved. While the Staff Association is strongly committed to ensure that the five-yearly review process is effectively carried out on a regular basis every five years, it understands and accepts the postponement of the five-yearly review schedule, proposed by the Management, as being in the interest of the Organization, provided that this postponement remains exceptional.

2019: Preparation of the next five-yearly review

The next five-yearly review of the conditions of employment will begin very early in 2020. The Staff Association will start preparing for this already in 2019. This is a most important subject and we will need your help!

2019 will also see another important milestone, namely the triennial actuarial review of the Pension Fund, the results of which will be presented to the Finance Committee and the Council in June.

In this context, the Staff Association reaffirms its willingness to work in an atmosphere of a solid, calm and fruitful concertation with the Management.

Contact us and let’s talk!

We would also like to remind you at the beginning of this year that the Staff Association represents and defends all members of personnel, both employed (MPEs) and associated (MPAs), in its discussions with the Management and Member States.

Therefore, do not hesitate to contact your staff delegate to enrich the debate by sharing your views on the topics we table or on issues that concern you.

Your support and fees are essential for us to represent you in the best possible way! Support the Staff Association through your membership and commitment.

The Staff Association will renew its Staff Council at the end of 2019; why not run for election to become a Staff Delegate?

and she likes to give them the same answer as the rubbish on her atrociously anti-scientific blog: almost none of the research makes any sense, problems aren't problems, physics should stop. Dear students, let me point out that

she is just an average incompetent layman who was allowed to pretend to be a physics researcher just because she has a non-convex reproductive organ and that's why, in the contemporary environment of the extreme political correctness, many people are afraid to point out the obvious fact that she is just an arrogant whining woman who has no clue about any of these physics problems.

But every actual physicist will agree with an overwhelming majority of what I am going to say.

If you misunderstand the basic issues of the current state of theoretical physics (and particle physics and cosmology) as much as she does, you simply cannot get a postdoc job in any research group, with a possible exception of the totally ludicrous low-quality or corrupt places. And most of the relevant people would probably agree with me that if you still haven't figured out why her musings are completely wrong, you shouldn't really be a physics graduate student, either.

She starts by saying that advances in physics may be initiated by theorists or by experimenters – so far so good – but she quickly gets to a list of 12 defining problems of modern theoretical physics (and/or particle physics and cosmology) and she says that almost none of them is really a good problem deserving research. Most of them are problems with some apparent fine-tuning similar to the hierarchy problem.

Let's discuss them one by one. We begin with the problems that are not problems of the fine-tuning type:

Dark matter: S.H. thinks it's an inconsistency between observations and theory. For this reason, it's a good problem but it's not clear what it means to solve it.

Dark matter isn't an inconsistency between observations and theory. It's just an inconsistency between observations and a theory supplemented with an extra, theoretically unmotivated assumption, namely that our telescopes are capable of seeing all sources of the gravitational field.

This assumption shouldn't be called "a theory" – and not even "a hypothesis" – because there's no coherent framework for such "a theory". The assumption is ad hoc, doesn't follow from any deeper principles, doesn't come with any nontrivial equations, and doesn't imply any consequences that would be "good news" i.e. that would have some independent reasons to be trusted.

In principle, there are two possible classes of explanations for the galactic rotation curves that disagree with the simplest assumption: Either general relativity is subtly wrong (that's the MOND theories), or there are extra masses that source the gravitational field (dark matter). There are good reasons why physicists generally find the second answer more likely – general relativity is nice and its deformations seem pathological, while there's nothing wrong about theories in which some matter doesn't interact through the electromagnetic fields.

To solve the problem of "dark matter" usually means to understand the microscopic and non-gravitational properties of this new stuff.

Grand unification: There's no reason to expect any unification because 3 forces are just fine. Maybe the value of the Weinberg angle is suggestive, she generously acknowledges, and it may or may not have an explanation.

Three non-gravitational forces may co-exist at the level of effective quantum field theory but it's a fact that at the fundamental level, forces cannot be separated. In string/M-theory, all forces and their messengers ultimately arise as different states of the same underlying objects (e.g. the vibrating string in perturbative string theory). Even if you decided that string/M-theory isn't the right description of the Universe, whatever would replace it would probably share the same qualitative properties.

Grand unification is the oldest scenario for how the three forces arise from the fundamental theory: they merge into one force even at the level of effective quantum field theory. Grand unified theories may arise as limits of string compactifications. But string/M-theory doesn't make grand unification mandatory. The three forces, more precisely the three factors of the \(U(1)\times SU(2)\times SU(3)\) gauge group, may look separate in every field theory approximation of the physics. That's the case in the braneworlds, for example. But even in such vacua with separate forces, the three forces are fundamentally unified – for example, the branes where the forces live influence each other in the extra dimensions and the stabilization needs to involve all of them.

Quantum gravity: S.H. thinks that QG cures an inconsistency and is a solution to a good problem. But there may be other solutions than "quantizing gravity".

First, there is no inconsistency between gravity and quantum mechanics. It is hard to reconcile these two key principles of physics because in combination, they're even more constraining than separately, but it is not impossible. String/M-theory is at least an example – a proof of the existence of at least one consistent theory that obeys the postulates of quantum mechanics and also includes Einstein-like gravitational force based on the curved spacetime. So the claim that there's an inconsistency is just wrong. There is only an inconsistency between quantum mechanics and the most naive way how to make Einstein's equations quantum mechanical. It is the direct "quantization of gravity" that is inconsistent (at least non-renormalizable)!

Instead, the right picture is a theory that exactly obeys the postulates of quantum mechanics while it only includes Einstein's equations as an approximation at long distances and in the classical limit. So everything that S.H. writes is upside down. "Quantization of gravity" is the inconsistent approach while the consistent "quantum gravity" is something else. And the statement that there are other ways to achieve the consistency also seems to be wrong – all the evidence indicates that there is only one consistent theory of quantum gravity in \(d\geq 4\), string/M-theory. It may have many descriptions and definitions as well as many solutions or vacua but all of them are related by dualities or dynamical processes.

Black hole information problem: A good problem in principle but S.H. thinks that it is not a "promising research direction" because there's no way to experimentally distinguish between the solutions.

The fact that black hole thermodynamics and especially "statistical physics" of black hole microstates would be inaccessible to experiments has been known from the beginning when these words were first combined. It didn't mean that it wasn't a promising research direction. Instead, it's been demonstrated that it was an immensely successful research direction. Ms Hossenfelder knows absolutely nothing about it – although she wrote (totally wrong) papers claiming to be dedicated to the issue – but this changes nothing whatever about the success of this subdiscipline of science.

The laymen may know the name of the recently deceased Stephen Hawking. A big part if not most of his well-deserved scientific fame boils down to the quantum mechanics of black holes and the black hole information paradox.

Lots of questions were answered by purely theoretical or mathematical methods. It's possible. And the consistency constraints are so stringent that the black hole information puzzle is more or less a partially unsolved yet very well-defined problem of a mathematical character. The best theoretical physicists of the world have surely spent some time with this theorists' puzzle par excellence.

Misunderstandings of quantum field theory: S.H. believes that the Landau pole and the infrared behavior of QFT aren't as well understood as was advertised years ago.

This is pretty much complete garbage as well. She doesn't understand these things but that doesn't mean that genuine physicists don't understand them. The infrared behavior of QFTs has been mastered in principle and it's being studied separately for each QFT or a class of QFTs. There is no real obstacle – just new theories obviously sometimes produce new infrared or ultraviolet questions that take some time to be answered. In the same way, the Landau pole is known to be a non-perturbative inconsistency unless the theory is UV-completed in some way and physicists have a good idea which theories have the Landau pole and which don't, which theories can be completed and which can't.

Like in most cases, Ms Hossenfelder just admits that she has no idea about these physics issues and she wants her brain-dead readers to believe that her ignorance implies that genuine physicists are also ignorant. But this implication never works. She doesn't know anything about the existence of the Landau pole or about the infrared problems in given theories – but genuine physicists know lots about these things and others that she constantly spits upon.

The measurement problem: According to her, it's not a philosophical problem but "an actual inconsistency" because the measurement is inconsistent with reductionism.

As explained in a hundred TRF blog posts or so, there is absolutely nothing inconsistent and absolutely nothing incomplete about the measurement in quantum mechanics. A measurement is a well-defined prerequisite needed to apply quantum mechanics and it requires an observer who identifies himself, or the degrees of freedom whose values he may perceive. The rest of the Universe is described by the Hilbert space. There is no violation of reductionism here. Reductionism means that the behavior of the observed physical systems may be reduced to the behavior of their building blocks. From his own viewpoint, the observer isn't a part of the observed physical system so it is perfectly legitimate not to decompose him into the building blocks. Instead, it's the very purpose of the term "observer" that it's a final irreducible entity that shouldn't be reduced to anything more fundamental, because he is one of the fundamental entities whose existence must be postulated and guaranteed before the theory may be applied.

All the research claiming that the measurement problem is a real problem, paradox, or inconsistency is a worthless pseudoscience that has produced zero scientifically valuable outcomes and there is no reason to think that something will ever change about it.

These are the problems of the fine-tuning type. Hossenfelder says that none of these are problems and they don't deserve any research because all the numbers may be fine-tuned and there is no inconsistency about it.

She just proves she is a moron who completely misunderstands the scientific method. None of these examples of fine-tuning represents a true logical inconsistency but virtually no inconsistencies in natural sciences may ever be truly logical. Instead, natural sciences deal with observations and certain observations look unlikely. Inconsistencies always arise when the observed phenomena are predicted to be extremely unlikely according to the current theory, framework, model, or paradigm.

For example, the Standard Model without a light Higgs boson wasn't "logically" inconsistent with observations. Instead, the LHC observed an excess of diphoton and other final states. This excess could have been considered a coincidence. But when the coincidence becomes too unlikely – like "1 in 3.5 million" unlikely (the 5-sigma evidence) – physicists may announce that the null hypothesis has been disproven and a new phenomenon has been found.

This is how it always works. Scientists always need to have some rules that say which observations should be more likely and which less likely, and when observations that are predicted to be insanely unlikely emerge in an experiment nevertheless, the null hypothesis is falsified.
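For reference, the 5-sigma discovery threshold mentioned above corresponds to a one-sided Gaussian tail probability of about \(2.9\times 10^{-7}\), i.e. roughly 1 in 3.5 million. A minimal sketch of that calculation (the function name here is mine):

```python
import math

def one_sided_p_value(n_sigma):
    """One-sided Gaussian tail probability: the chance that a pure
    background fluctuation reaches at least n_sigma standard deviations
    above the mean. Uses the complementary error function."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2.0))

# The particle-physics discovery threshold:
p5 = one_sided_p_value(5.0)  # about 2.87e-7, roughly 1 in 3.5 million
```

For comparison, the weaker "3-sigma evidence" level gives a p-value of about 0.00135, i.e. roughly 1 in 740 – which is why 3-sigma bumps come and go while 5-sigma ones are announced as discoveries.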

When it comes to the parameters above – the vacuum energy (in Planck units), the Higgs mass (in Planck units), the relative deviation of the Universe from a flat one, the low concentration of the magnetic monopoles, the large baryon-antibaryon asymmetry in the present Universe, and the small variations in the cosmic microwave background temperature – all of them have very unlikely, typically very small (much smaller than one) values.

By Bayesian inference, such small values are unlikely, and this simply poses a problem that is in principle the same as the 5-sigma excess of diphoton states that could be explained by the Higgs boson – or any other experimental observation that is used as evidence for new phenomena in any context of the natural sciences.

Every meaningful paradigm must say at least qualitatively what the parameters of the theories may be and what they may not be. For dimensionless parameters, there must be a normalizable statistical distribution that the scientist assumes, otherwise he doesn't know what he is doing. To say the least, such a statistical distribution must follow from a deeper theory.

The fine-structure constant \(\alpha\approx 1/137.036\) should ideally be calculable from a deeper theory. But in the absence of a precise calculation, one should still ask for at least a more modest, approximate explanation – one that gives an order-of-magnitude estimate for this constant, and analogously for the more extremely fine-tuned constants in the list above.

The problem is that even the order-of-magnitude estimates seem to be vastly wrong in most cases. The tiny values are extremely unlikely and according to Bayesian inference, tiny likelihoods of the observations according to a theory translate to a tiny likelihood of the theory itself! That's true regardless of the precise choice of the probability distribution, as long as the distribution is natural by itself. Normalizable, smooth, non-contrived distributions simply have to be similar to the uniform one on \((0,1)\) or the Gaussian around zero. The probability that a special condition such as \(|x|\lt 10^{-123}\) holds is tiny, about \(p\approx 10^{-123}\).
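The Bayesian logic above can be made concrete with a toy comparison (all numbers here are illustrative, not from the post): under a theory with no special mechanism, the parameter is uniform on \((0,1)\); under a theory whose mechanism forces the parameter to be tiny, it is uniform on \((0,\epsilon)\). Observing a tiny value then favours the mechanism by a likelihood ratio of order \(1/\epsilon\):

```python
def likelihood_ratio(x_observed, epsilon):
    """Toy Bayes-factor sketch. Theory A: parameter x uniform on (0, 1),
    so its probability density is 1. Theory B: a mechanism forces x to
    be uniform on (0, epsilon), density 1/epsilon on that interval.
    Returns the likelihood ratio B/A for the observed value."""
    density_a = 1.0 if 0.0 < x_observed < 1.0 else 0.0
    density_b = (1.0 / epsilon) if 0.0 < x_observed < epsilon else 0.0
    return density_b / density_a

# Observing x = 5e-7 when the mechanism predicts x < 1e-6:
# the data favour the "mechanism" theory by a factor of about a million.
ratio = likelihood_ratio(5e-7, 1e-6)
```

The precise prior on \((0,1)\) barely matters, as the post argues: any smooth, normalizable, non-contrived distribution assigns a probability of order \(\epsilon\) to the interval \((0,\epsilon)\), so the factor \(1/\epsilon\) survives.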

A deeper theory could indeed say that a dimensionless real parameter has the value \(\Lambda=10^{\pm 123}\) but every real scientist has the curiosity to ask "Why!?" If he could talk to God and God wanted to keep the answer classified, the scientist would insist: "But please, God, tell me at least roughly why such a tiny number arises." You can't really live without the question why.

So this fine-tuning problem is a problem in all the cases where Hossenfelder claims that no problem exists. And some of these problems have been given a solution that is almost universally accepted – especially inflationary cosmology that solves the flatness, monopole, and horizon problem. The monopole problem only exists if we adopt some grand unified or similar theory that implies that magnetic monopoles exist in principle. (String theory probably says that they must exist, it's a general principle of the same kind as e.g. the weak gravity conjecture.) And once they may exist, a generic early cosmology would probably generate too many of them, in contradiction with the observations (of zero magnetic monopoles so far). That's where inflation enters and dilutes the concentration of magnetic monopoles to a tiny density, in agreement with observations.

The flatness and horizon problems were severe and almost arbitrarily severe in the sense that the probability that the initial conditions would agree with the nearly flat and nearly uniform observations could go like \(10^{-V}\) where \(V\) is the volume of the Universe in some microscopic units. The greater part of the Universe you see, the more insanely unlikely the flatness or homogeneity would be. This would be unacceptable which is why some explanation – inflation or something that has almost identical implications – has to operate in Nature.

In the case of the dark energy and the hierarchy problem, the fine-tuning is even more surprising because the natural fundamental parameters aren't even close to zero – we could say that for some reason, the numbers very close to zero are more likely than the uniform distribution indicates. Instead, the natural parameter must be very precisely tuned close to some very special nonzero values because the finite, large part of these constants is compensated by loop effects in quantum field theory and similar phenomena based on quantum mechanics.

For this reason, the tiny value of the cosmological constant must be considered an experimental proof of some qualitative mechanism. We are not sure what the mechanism is but it could be the anthropic selection. The anthropic selection is unattractive but it could be considered a solution of the cosmological constant problem. The constant is tiny because it can be anything, it tries all values somewhere, and no observers arise in the Universe where the value is large because these values create worlds that are inhospitable for life. That's the anthropic explanation. If it is illegitimate, there must exist another one – probably a better one – but one that is comparably qualitative or philosophically far-reaching.

The baryon problem is a problem of the opposite type, in some sense, because we observe a much larger asymmetry between matter and antimatter than what would follow from simple theories and generic initial conditions. The observed matter-antimatter asymmetry in the Universe is therefore more or less an experimental proof of some special era in cosmology which created the asymmetry. If you had a theory that naturally predicts a symmetry between matter and antimatter on average, they would have largely annihilated with each other, and the probability that as much matter survives as we see would again go like \(10^{-V}\). Although it's not a "logical" contradiction, the probability is zero in practice.

Hossenfelder says that none of those things are good research directions because we may fudge all the numbers and shut up. But that's exactly what a proper scientist will never be satisfied with. Alessandro wrote the following analogy:

A century ago somebody could have written "Atomic Masses It would be nice to have a way to derive the masses of the atoms from a standard model with fewer parameters, but there is nothing wrong with these masses just being what they are. Thus, not a good problem." Maybe particle masses are a good problem, maybe not.

Right. We could go further. The Universe was created by God and all the species are what they are, planetary orbits are slightly deformed circles, everything is what it is, the Pope is infallible, and you should shut up and stop asking any questions. But that's a religious position that curious scientists have never accepted. It's their nature that they cannot accept such answers because these answers are clearly no good. If something – like the observed suggestive patterns – seems extremely unlikely according to a theory that is being presented as a "shut up" final explanation, it's probably because it's not the final explanation. And scientists always wanted a better one. And they got very far. And the scientists in the present want to get even further – that's what their predecessors also wanted.

Hossenfelder doesn't have any curiosity. As a thinker, she totally sucks. She isn't interested in any problem of physics, let alone a deep one. She should have been led to Kinder, Küche, Kirche but instead, to improve their quotas, some evil people have violently pushed her into the world of physics, a world she viscerally hates and she has zero skills to deal with. She isn't interested in science – just like a generic stoner may be uninterested in science. He's so happy when he's high and he doesn't care whether the Earth is flat or round and whether the white flying thing is a cloud or an elephant. But that doesn't mean that physics or science or problems of state-of-the-art physics aren't interesting. It doesn't mean that all great minds unavoidably study these problems. Instead, it means that Hossenfelder and the stoner are lacking any intellectual value. They are creatures with dull, turned-off brains, mammals without curiosity, creativity, or a desire for a better understanding of Nature.

I find it extremely offensive that fake scientists such as Hossenfelder who are wrong about literally every single entry in this list – because they just articulate the most ordinary misconceptions of the laymen who have no clue about the field – are being marketed as real scientists by the fraudulent media. This industry is operated by the same scammers who like to prevent the father of the DNA from communicating his answers to the question whether the DNA code affects intelligence. It surely does, James Watson knows that, every scientifically literate person knows that, and everyone who doubts it is a moron or a spineless, opportunist, hypocritical poser.

Every competent physicist also knows that Hossenfelder's opinions on the promising research directions are pretty much 100% wrong and only serve to delude the laymen – while their effect on the actual researchers is zero.

Abstract. The global warming crisis is part of a bigger transformation in which humanity realizes that the Earth is a finite system and that our population, energy usage, and the like cannot continue to grow exponentially. If civilization survives this transformation, it will affect mathematics—and be affected by it—just as dramatically as the agricultural revolution or industrial revolution. We should get ready!

The slides are rather hard to see in the video, but you can read them here while you watch the talk. Click on links in green for more information!

January 12, 2019

I talked a bit on Twitter last night about the Past Hypothesis and the low entropy of the early universe. Responses reminded me that there are still some significant misconceptions about the universe (and the state of our knowledge thereof) lurking out there. So I’ve decided to quickly list, in Tweet-length form, some true facts about cosmology that might serve as a useful corrective. I’m also putting the list on Twitter itself, and you can see comments there as well.

The Big Bang model is simply the idea that our universe expanded and cooled from a hot, dense, earlier state. We have overwhelming evidence that it is true.

The Big Bang event is not a point in space, but a moment in time: a singularity of infinite density and curvature. It is completely hypothetical, and probably not even strictly true. (It’s a classical prediction, ignoring quantum mechanics.)

People sometimes also use “the Big Bang” as shorthand for “the hot, dense state approximately 14 billion years ago.” I do that all the time. That’s fine, as long as it’s clear what you’re referring to.

The Big Bang might have been the beginning of the universe. Or it might not have been; there could have been space and time before the Big Bang. We don’t really know.

Even if the BB was the beginning, the universe didn’t “pop into existence.” You can’t “pop” before time itself exists. It’s better to simply say “the Big Bang was the first moment of time.” (If it was, which we don’t know for sure.)

The Borde-Guth-Vilenkin theorem says that, under some assumptions, spacetime had a singularity in the past. But it only refers to classical spacetime, so says nothing definitive about the real world.

The universe did not come into existence “because the quantum vacuum is unstable.” It’s not clear that this particular “Why?” question has any answer, but that’s not it.

If the universe did have an earliest moment, it doesn’t violate conservation of energy. When you take gravity into account, the total energy of any closed universe is exactly zero.

The energy of non-gravitational “stuff” (particles, fields, etc.) is not conserved as the universe expands. You can try to balance the books by including gravity, but it’s not straightforward.

The universe isn’t expanding “into” anything, as far as we know. General relativity describes the intrinsic geometry of spacetime, which can get bigger without anything outside.

Inflation, the idea that the universe underwent super-accelerated expansion at early times, may or may not be correct; we don’t know. I’d give it a 50% chance, lower than many cosmologists but higher than some.

The early universe had a low entropy. It looks like a thermal gas, but that’s only high-entropy if we ignore gravity. A truly high-entropy Big Bang would have been extremely lumpy, not smooth.

Dark matter exists. Anisotropies in the cosmic microwave background establish beyond reasonable doubt the existence of a gravitational pull in a direction other than where ordinary matter is located.

We haven’t directly detected dark matter yet, but most of our efforts have been focused on Weakly Interacting Massive Particles. There are many other candidates we don’t yet have the technology to look for. Patience.

Dark energy may not exist; it’s conceivable that the acceleration of the universe is caused by modified gravity instead. But the dark-energy idea is simpler and a more natural fit to the data.

Dark energy is not a new force; it’s a new substance. The force causing the universe to accelerate is gravity.

We have a perfectly good, and likely correct, idea of what dark energy might be: vacuum energy, a.k.a. the cosmological constant. An energy inherent in space itself. But we’re not sure.

We don’t know why the vacuum energy is much smaller than naive estimates would predict. That’s a real puzzle.

Neither dark matter nor dark energy are anything like the nineteenth-century idea of the aether.

Feel free to leave suggestions for more misconceptions. If they’re ones that I think many people actually have, I might add them to the list.

January 10, 2019

The award-winning blogger beard Telescoper used to do astronomy look-a-likes, which unfortunately sometimes strayed into other fields. If he strayed a bit further I think he’d find a striking one in today’s news:

January 09, 2019

Hazardous waste comprises all types of waste with the potential to cause a harmful effect on the environment and pet and human health. It is generated from multiple sources, including industries, commercial properties and households and comes in solid, liquid and gaseous forms.

Different localities have different local and state laws regarding the management of hazardous waste. Irrespective of your jurisdiction, management runs from proper hazardous waste collection at your Utah property through to its eventual disposal.

After collection using the appropriate structures recommended by environmental protection authorities, there are many methods of treating the waste. One of the most common and least expensive is physical treatment. The following are the physical treatment options for hazardous wastewater.

Sedimentation

In this treatment technique, the waste is separated into a liquid and a solid. The solid waste particles in the liquid are left to settle at a container’s bottom through gravity. Sedimentation is done in a continuous or batch process.

Continuous sedimentation is the standard option and is generally used for the treatment of large quantities of liquid waste. It is often used to separate heavy metals in the steel, copper and iron industries and fluoride in the aluminum industry.
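For a feel of the settling rates involved, Stokes' law is the standard first-pass model for small particles settling under gravity. A minimal sketch; the particle size, densities, and viscosity below are illustrative values, not taken from the post:

```python
def stokes_settling_velocity(d, rho_p, rho_f, mu, g=9.81):
    """Terminal settling velocity (m/s) of a small sphere in a fluid,
    from Stokes' law: v = g * d**2 * (rho_p - rho_f) / (18 * mu)."""
    return g * d**2 * (rho_p - rho_f) / (18 * mu)

# Example (illustrative numbers): a 50-micron metal-hydroxide floc
# (density 2500 kg/m^3) settling in water (1000 kg/m^3, 0.001 Pa*s).
v = stokes_settling_velocity(d=50e-6, rho_p=2500, rho_f=1000, mu=1e-3)
print(f"settling velocity: {v * 1000:.2f} mm/s")  # ~2 mm/s
```

Velocities this small explain why continuous sedimentation basins need long residence times for fine particles.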

Electro-Dialysis

This treatment method separates wastewater into a concentrated stream and an ion-depleted aqueous stream. The wastewater passes through alternating cation- and anion-permeable membranes in a compartment.

A direct current is then applied to allow the passage of cations and anions to opposite directions. This results in solutions with elevated concentrations of positive and negative ions and another with a low ion concentration.

Reverse Osmosis

This uses a semi-permeable membrane for the separation of dissolved organic and inorganic elements in wastewater. The wastewater is forced through the semi-permeable membrane by pressure, and larger molecules are filtered out by the small membrane pores.

Polyamide membranes have largely replaced polysulphone ones for wastewater treatment nowadays owing to their ability to withstand liquids with high pH. Reverse osmosis is usually used in the desalinization of brackish water and treating electroplating rinse waters.

Solvent Extraction

This involves separating the components of a liquid through contact with an immiscible liquid. The most common solvents used in this treatment technique are supercritical fluids (SCFs), mainly CO2.

These fluids exist above their critical temperature and pressure, and they have a low density and fast mass transfer when mixed with other liquids. Solvent extraction is used for extracting oil from the emulsions used in steel and aluminum processing and organohalide pesticides from treated soil.

Supercritical ethane as a solvent is also useful for the purification of waste oils contaminated with water, metals, and PCBs.

Some companies and households have tried handling their hazardous wastewater themselves to minimize costs. In most cases, this puts their employees at risk, since the “treated” water is still often dangerous to human health, the environment and their machines.

The physical processes above, sometimes used together with chemical treatment techniques, are the reliable options for truly safe wastewater.

Hey, I'll admit it. I wouldn't have known about this 150th birthday of the periodic table if it weren't for this news article. ScienceNews has a lot more detail on the history and background of Mendeleev, who came up with the first periodic table.

Unfortunately, there might be a chance for a bit of inaccuracy here from the Miami Herald news article.

The periodic table lists the elements in order of their atomic weights, but when Mendeleev was classifying them, no one even knew what was inside these tiny things called atoms.

While it is true that, historically, Mendeleev originally arranged the elements by atomic weight (since no one knew what was inside atoms at that time), the periodic table that we have now lists the elements in order of their atomic number, i.e. the number of protons in the element. This is because we now know that an element with a particular atomic number may have several different isotopes (atomic weights). So the atomic weight is not a unique number for an element, but the atomic number is. That is why the periodic table is arranged in order of the element's atomic number.
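The tellurium-iodine pair is the classic illustration of why atomic number, not atomic weight, is the right ordering: tellurium is heavier than iodine, yet its chemistry forced Mendeleev to place it first. A quick sketch using standard atomic weights:

```python
# Each entry: (symbol, atomic number Z, standard atomic weight)
elements = [("Sb", 51, 121.76), ("Te", 52, 127.60),
            ("I", 53, 126.90), ("Xe", 54, 131.29)]

by_weight = [e[0] for e in sorted(elements, key=lambda e: e[2])]
by_number = [e[0] for e in sorted(elements, key=lambda e: e[1])]

print(by_weight)  # ['Sb', 'I', 'Te', 'Xe'] -- weight ordering swaps Te and I
print(by_number)  # ['Sb', 'Te', 'I', 'Xe'] -- the modern periodic-table order
```

Sorting by weight puts iodine in the oxygen family and tellurium with the halogens, which contradicts their chemistry; sorting by atomic number fixes it.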

A paper on the arXiv this morning offers an explanation for an intriguing, long-standing anomalous result from the DAMA experiment. According to our current best model of how the universe hangs together, the Earth orbits the Sun within a galactic …

January 08, 2019

This blog entry is somewhat different than usual. Rather than writing about some particular research project, I will write about a general vibe, directing my research.

As usual, research starts with a 'why?'. Why does something happen, and why does it happen in this way? Being the theoretician that I am, this question often equates to wanting a mathematical description of both the question and the answer.

Already very early in my studies I ran into peculiar problems with this desire. It usually left me staring at the words '...and then nature made a choice', asking myself: how could it? A simple example of the problem is a magnet. You all know that a magnet has a north pole and a south pole, and that these two are different. So, how is it decided which end of the magnet becomes the north pole and which the south pole? At the beginning you always get to hear that this is a random choice, and that one particular choice just happens to be made. But this is not really the answer. If you dig deeper, you find that the metal of any magnet was originally very hot, likely liquid. In this situation, a magnet is not really magnetic. It becomes magnetic when it is cooled down and becomes solid. At some temperature (the so-called Curie temperature), it becomes magnetic, and the poles emerge. And here this apparent miracle of a 'choice by nature' happens. Only it does not. The magnet does not cool down all by itself; it has a surrounding. And the surrounding can have magnetic fields as well, e.g. the Earth's magnetic field. The decision about what is south and what is north is made by how the magnet forms relative to this field. And thus, there is a reason. We do not see it directly, because magnets have usually been moved since then, so this correlation is no longer obvious. But if we heated the magnet again and let it cool down once more, we could observe it.
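The magnet story can be caricatured in a few lines with a mean-field toy model (the coupling and temperature values below are illustrative, not from the post): a vanishingly small external field decides the sign of the magnetization, while a perfectly isolated magnet never magnetizes at all.

```python
import math

def magnetize(T, h, steps=2000):
    """Iterate the mean-field self-consistency equation m = tanh((m + h)/T)
    (exchange coupling set to 1). Below the critical temperature T = 1 a
    nonzero magnetization appears; its SIGN is picked by the tiny external
    field h -- the 'surroundings' making the choice."""
    m = 0.0
    for _ in range(steps):
        m = math.tanh((m + h) / T)
    return m

print(magnetize(T=0.5, h=+1e-4))  # converges to a positive magnetization
print(magnetize(T=0.5, h=-1e-4))  # same magnitude, opposite sign
print(magnetize(T=0.5, h=0.0))    # stays exactly 0: no outside, no choice
```

The h = 0 case is the 'eternal balanced struggle' described later in the post: with no outside influence, the symmetric state never breaks.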

But this immediately leaves you with the question of where the Earth's magnetic field came from, and how it got its direction. Well, it comes from the liquid metallic core of the Earth, and it aligns, more or less, along or opposite to the rotation axis of the Earth. Thus the questions become: how did the rotation axis of the Earth come about, and why does it have a liquid core? Both questions are well understood, and the answers arise from how the Earth formed billions of years ago, from the mechanics of the rotating disk of dust and gas which formed around our fledgling sun. Which in turn comes from the dynamics on even larger scales. And so on.

As you see, whenever one had the feeling of a random choice, it was actually the outside of what we had looked at so far which made the decision. So, such questions always lead us to include more in what we try to understand.

'Hey', I can now literally hear people who are a bit more acquainted with physics say, 'doesn't quantum mechanics make truly random choices?'. The answer is yes and no in equal measure. This is probably one of the more fundamental problems of modern physics. Yes, our description of quantum mechanics, as we teach it in courses, has intrinsic randomness. But when does it occur? Exactly: whenever we jump outside of the box we describe in our theory. Real, random choice is encountered in quantum physics only when we transcend the system we are considering, e.g. by an external measurement. This is one of the reasons why this is known as the 'measurement problem'. If we stay inside the system, this does not happen, but at the expense of losing contact with things, like an ordinary magnet, which we are used to. The objects we are describing become obscure, and we talk about wave functions and the like. Whenever we try to extend our description to also include the measurement apparatus, on the other hand, we again get something which is strange, but not as random as it originally looked. Although talking about it becomes almost impossible beyond the mathematical description, and it is not really clear what 'random' even means anymore in this context. This problem is one of the big conceptual problems of physics. While it is related to what I am talking about here, it can still be kept separate.

And in fact, it is not this divide that I want to talk about, at least not today. I just wanted to get this type of 'quantum choice' out of the way. Rather, I want to get to something else.

If we stay inside the system we describe, then everything becomes calculable. Our mathematical description is closed in the sense that, after fixing a theory, we can calculate everything. Well, at least in principle; in practice our technical capabilities may limit this, but that is of no importance for the conceptual point. Once we have fixed the theory, there is no choice anymore. There is no outside, and thus everything needs to come from inside the theory. Hence a magnet in isolation will never magnetize, because there is nothing which can make the decision about how. The different possibilities are caught in an eternally balanced struggle, and none can win.

Which makes a lot of sense, if you take physical theories really seriously. After all, one of the basic tenets is that there is no privileged frame of reference: 'Everything is relative'. If there is nothing else, nothing can happen which creates an absolute frame of reference without violating the very principles on which we found physics. If we take our own theories seriously, and push them to the bitter end, this is what needs to come about.

And here I come back to my own research. One of its driving principles has been to really push this seriousness, and to ask what it implies if one really, really takes it seriously. Of course, this is based on the assumption that the theory is (sufficiently) adequate, but that is everyday uncertainty for a physicist anyhow. It requires me to separate very, very carefully what is really inside and what is outside. And this leads to quite surprising results. Essentially, most of my research on Brout-Englert-Higgs physics, as described in previous entries, comes about because of this approach. It partly leads to results quite at odds with common lore, which often means a lot of work to convince people. Even if the mathematics is valid and correct, interpretation issues are much more open to debate when it comes to implications.

Is this point of view adequate? After all, we know for sure that we are not yet finished: our theories do not contain all there is, and there is an 'outside', however it may look. And I agree. But I think it is very important that we distinguish very clearly between what is an outside influence and what is not. And as a first step to establishing what is outside, and thus, in a sense, 'new physics', we need to understand what our theories say when they are taken in isolation.

Of course this is my own idiosyncratic take on the subject: obviously algebraic geometers have their own perfectly fine notion of what these things are good for. But I never got the hang of that.

Today I want to talk about how the Veronese embedding can be used to ‘clone’ a classical system. For any number k, you can take a classical system and build a new one; a state of this new system is k copies of the original system constrained to all be in the same state! This may not seem to do much, but it does something: for example, it multiplies the Kähler structure on the classical state space by k. And it has a quantum analogue, which has a much more notable effect!

Last time I looked at an example, where I built the spin-3/2 particle by cloning the spin-1/2 particle.

In brief, it went like this. The space of classical states of the spin-1/2 particle is the Riemann sphere \(\mathbb{C}\mathrm{P}^1\). This just happens to also be the space of quantum states of the spin-1/2 particle, since it’s the projectivization of \(\mathbb{C}^2\). To get the spin-3/2 particle we look at the map

\[ \mathbb{C}^2 \to \mathrm{Sym}^3 \mathbb{C}^2, \qquad v \mapsto v \otimes v \otimes v . \]

You can think of this as the map that ‘triplicates’ a spin-1/2 particle, creating 3 of them in the same state. This gives rise to a map between the corresponding projective spaces, which we should probably call

\[ \nu_3 \colon \mathbb{C}\mathrm{P}^1 \to \mathbb{C}\mathrm{P}^3 . \]

It’s an embedding.

Algebraic geometers call the image of this embedding the twisted cubic, since it’s a curve of degree three in 3d projective space. But for us, it’s the embedding of the space of classical states of the spin-3/2 particle into the space of quantum states. (The fact that classical states give specially nice quantum states is familiar in physics, where these specially nice quantum states are called ‘coherent states’, or sometimes ‘generalized coherent states’.)
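Since the twisted cubic is so concrete, it is easy to check numerically that the ‘triplication’ map lands on it. A small sketch (the minor equations used are the standard quadrics cutting out the twisted cubic; the random state is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.standard_normal(2) + 1j * rng.standard_normal(2)

# The map on C^2 in monomial coordinates: (a, b) |-> (a^3, a^2 b, a b^2, b^3),
# i.e. triplicate the spin-1/2 state and read off the symmetric tensor.
z = np.array([a**3, a**2 * b, a * b**2, b**3])

# The image lies on the twisted cubic, cut out by the 2x2 minors of
# [[z0, z1, z2], [z1, z2, z3]]:
minors = [z[0]*z[2] - z[1]**2, z[1]*z[3] - z[2]**2, z[0]*z[3] - z[1]*z[2]]
print(max(abs(m) for m in minors))  # ~ 0 up to floating-point rounding
```

Every spin-1/2 state, whatever its direction, produces a point satisfying all three quadrics.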

Now, you’ll have noted that the numbers 2 and 3 show up a bunch in what I just said. But there’s nothing special about these numbers! They could be arbitrary natural numbers… well, > 1 if we don’t enjoy thinking about degenerate cases.

Here’s how the generalization works. Let’s think of guys in \(\mathbb{C}^n\) as linear functions on the dual of this space. We can raise any one of them to the kth power and get a homogeneous polynomial of degree k. The space of such polynomials is called \(\mathrm{Sym}^k \mathbb{C}^n\), so raising to the kth power defines a map

\[ \mathbb{C}^n \to \mathrm{Sym}^k \mathbb{C}^n, \qquad v \mapsto v^k . \]

This in turn gives rise to a map between the corresponding projective spaces:

\[ P(\mathbb{C}^n) \to P(\mathrm{Sym}^k \mathbb{C}^n) . \]

This map is an embedding, since different linear functions give different polynomials when you raise them to the kth power, at least if \(k \ge 1\). And this map is famous: it’s called the kth Veronese embedding. I guess it’s often denoted \(\nu_k\).

An important special case occurs when we take \(n = 2\), as we’d been doing before. The space of homogeneous polynomials of degree k in two variables has dimension \(k+1\), so we can think of the Veronese embedding as a map

\[ \nu_k \colon \mathbb{C}\mathrm{P}^1 \to \mathbb{C}\mathrm{P}^k , \]

embedding the projective line as a curve in \(\mathbb{C}\mathrm{P}^k\). This sort of curve is called a rational normal curve. When \(k = 3\) it’s our friend from last time, the twisted cubic.

In general, we can think of \(\mathbb{C}\mathrm{P}^k\) as the space of quantum states of the spin-k/2 particle, since we got it from projectivizing the spin-k/2 representation of \(\mathrm{SU}(2)\), namely \(\mathrm{Sym}^k \mathbb{C}^2\). Sitting inside here, the rational normal curve is the space of classical states of the spin-k/2 particle—or in other words, ‘coherent states’.

Maybe I should expand on this, since it flew by so fast! Pick any direction you want the angular momentum of your spin-k/2 particle to point. Think of this as a point on the Riemann sphere \(\mathbb{C}\mathrm{P}^1\), and think of that as coming from some vector \(v \in \mathbb{C}^2\). That describes a quantum spin-1/2 particle whose angular momentum points in the desired direction. But now, form the tensor product

\[ v \otimes v \otimes \cdots \otimes v \qquad (k \text{ factors}). \]

This is completely symmetric under permuting the factors, so we can think of it as a vector in \(\mathrm{Sym}^k \mathbb{C}^2\). And indeed, it’s just what I was calling \(v^k\).

This vector describes a collection of k indistinguishable quantum spin-1/2 particles with angular momenta all pointing in the same direction. But it also describes a single quantum spin-k/2 particle whose angular momentum points in that direction! Not all vectors in \(\mathrm{Sym}^k \mathbb{C}^2\) are of this form, clearly. But those that are, are called ‘coherent states’.
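The complete symmetry of this tensor, and the fact that its distinct entries are exactly the monomials of the Veronese image, can be checked directly. A small numerical sketch for \(k = 3\) (the particular state v is arbitrary):

```python
import numpy as np

v = np.array([0.6, 0.8j])               # a normalized spin-1/2 state in C^2
T = np.einsum('i,j,k->ijk', v, v, v)    # the coherent state v x v x v in (C^2)^{(x)3}

# Completely symmetric under permuting the three tensor factors:
assert np.allclose(T, T.transpose(1, 0, 2))
assert np.allclose(T, T.transpose(2, 1, 0))

# Its distinct components are the monomials a^{3-j} b^j -- the Veronese image:
a, b = v
for j, comp in enumerate([T[0, 0, 0], T[0, 0, 1], T[0, 1, 1], T[1, 1, 1]]):
    assert np.isclose(comp, a**(3 - j) * b**j)

# And |v (x) v (x) v| = |v|^3, so a unit v gives a unit coherent state:
print(np.linalg.norm(T.reshape(-1)))  # 1.0
```

So the 2^3 = 8 tensor entries carry only 4 independent numbers: exactly a vector in \(\mathrm{Sym}^3 \mathbb{C}^2 \cong \mathbb{C}^4\).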

Now, let’s do this all a bit more generally. We’ll work with \(\mathbb{C}^n\), not just \(\mathbb{C}^2\). And we’ll use a variety \(M \subseteq P(\mathbb{C}^n)\) as our space of classical states, not necessarily all of \(P(\mathbb{C}^n)\).

Remember, we’ve got:

• a category \(\mathtt{Class}\) where the objects are linearly normal subvarieties \(M \subseteq P(\mathbb{C}^n)\) for arbitrary \(n\),

and

• a category \(\mathtt{Quant}\) where the objects are linear subspaces \(V \subseteq \mathbb{C}^n\) for arbitrary \(n\).

The morphisms in each case are just inclusions. We’ve got a ‘quantization’ functor

\[ Q \colon \mathtt{Class} \to \mathtt{Quant} \]

that maps \(M \subseteq P(\mathbb{C}^n)\) to the smallest linear subspace of \(\mathbb{C}^n\) whose projectivization contains \(M\). And we’ve got what you might call a ‘classicization’ functor going back:

\[ P \colon \mathtt{Quant} \to \mathtt{Class} . \]

We actually call this ‘projectivization’, since it sends any linear subspace \(V \subseteq \mathbb{C}^n\) to its projective space \(P(V)\) sitting inside \(P(\mathbb{C}^n)\).

We would now like to get the Veronese embedding into the game, copying what we just did for the spin-k/2 particle. We’d like each Veronese embedding to define a functor from \(\mathtt{Class}\) to \(\mathtt{Class}\), and also a functor from \(\mathtt{Quant}\) to \(\mathtt{Quant}\). For example, the first of these should send the space of classical states of the spin-1/2 particle to the space of classical states of the spin-k/2 particle. The second should do the same for the space of quantum states.

The quantum version works just fine. Here’s how it goes. An object in \(\mathtt{Quant}\) is a linear subspace

\[ V \subseteq \mathbb{C}^n \]

for some \(n\). Our functor should send this to

\[ \mathrm{Sym}^k V \subseteq \mathrm{Sym}^k \mathbb{C}^n \cong \mathbb{C}^{\left(\!\binom{n}{k}\!\right)} . \]

Here \(\left(\!\binom{n}{k}\!\right) = \binom{n+k-1}{k}\), pronounced ‘n multichoose k’, is the number of ways to choose k not-necessarily-distinct items from a set of n, since this is the dimension of the space of degree-k homogeneous polynomials on \(\mathbb{C}^n\). (We have to pick some sort of ordering on monomials to get the isomorphism above; this is one of the clunky aspects of our current framework, which I plan to fix someday.)
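The multichoose count can be sanity-checked by literally enumerating the degree-k monomials. A small sketch:

```python
from itertools import combinations_with_replacement
from math import comb

def multichoose(n, k):
    """'n multichoose k': ways to pick k not-necessarily-distinct items
    from a set of n -- also the dimension of Sym^k(C^n)."""
    return comb(n + k - 1, k)

n, k = 4, 3
# A degree-3 monomial in 4 variables = a multiset of 3 variable indices:
monomials = list(combinations_with_replacement(range(n), k))
print(len(monomials), multichoose(n, k))  # 20 20

# The spin-3/2 case from earlier: dim Sym^3(C^2) = 4.
print(multichoose(2, 3))  # 4
```

The enumeration and the binomial formula agree, and the n = 2 case recovers the k + 1 dimension of the spin-k/2 state space.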

This process indeed defines a functor, and the only reasonable name for it is

\[ \mathrm{Sym}^k \colon \mathtt{Quant} \to \mathtt{Quant} . \]

Intuitively, it takes any state space of any quantum system and produces the state space for k indistinguishable copies of that system. (If you’re a physicist, muttering the phrase ‘identical bosons’ may clarify things. There is also a fermionic version where we use exterior powers instead of symmetric powers, but let’s not go there now.)

The classical version of this functor suffers from a small glitch, which however is easy to fix. An object in \(\mathtt{Class}\) is a linearly normal subvariety

\[ M \subseteq P(\mathbb{C}^n) \]

for some \(n\). Applying the kth Veronese embedding we get a subvariety

\[ \nu_k(M) \subseteq P(\mathrm{Sym}^k \mathbb{C}^n) . \]

However, I don’t think this is linearly normal, in general. I think it’s linearly normal iff \(M\) is k-normal. You can take this as a definition of k-normality, if you like, though there are other equivalent ways to say it.

Luckily, a projectively normal subvariety of projective space is k-normal for all \(k \ge 1\). And even better, projectively normal varieties are fairly common! In particular, any projective space is a projectively normal subvariety of itself.

So, we can redefine the category \(\mathtt{Class}\) by letting objects be projectively normal subvarieties \(M \subseteq P(\mathbb{C}^n)\) for arbitrary \(n\). I’m using the same notation for this new category, which is ordinarily a very dangerous thing to do, because all our results about the original version are still true for this one! In particular, we still have adjoint functors

\[ Q \colon \mathtt{Class} \to \mathtt{Quant}, \qquad P \colon \mathtt{Quant} \to \mathtt{Class}, \]

defined exactly as before. But now the kth Veronese embedding gives a functor

\[ \nu_k \colon \mathtt{Class} \to \mathtt{Class} . \]

Intuitively, this takes any state space of any classical system and produces the state space for k indistinguishable copies of that system that are all in the same state. It has no effect on the classical state space as an abstract variety, just its embedding into projective space—which in turn affects its Kähler structure and the line bundle it inherits from projective space. In particular, its symplectic structure gets multiplied by k, and the line bundle over it gets replaced by its kth tensor power. (These are well-known facts about the Veronese embedding.)
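The factor of k can be seen at the level of Kähler potentials: with the inner product that \(\mathrm{Sym}^k \mathbb{C}^2\) inherits from the tensor power, \(|v^{\otimes k}|^2 = |v|^{2k}\), so the Fubini–Study potential of the Veronese image is k times that of v, and hence the symplectic form pulls back to k times itself. A numerical check of the underlying binomial identity (the monomial coordinates and binomial weights are my assumed conventions, not spelled out in the post):

```python
import numpy as np
from math import comb, log

k = 5
rng = np.random.default_rng(1)
a, b = rng.standard_normal(2) + 1j * rng.standard_normal(2)

# Norm-squared of the symmetric tensor v^{(x) k}, written in monomial
# coordinates a^{k-j} b^j with their binomial weights C(k, j):
norm_sq_image = sum(comb(k, j) * abs(a)**(2 * (k - j)) * abs(b)**(2 * j)
                    for j in range(k + 1))

# Fubini-Study Kahler potential of the image = k times that of v,
# since sum_j C(k,j) |a|^{2(k-j)} |b|^{2j} = (|a|^2 + |b|^2)^k:
lhs = log(norm_sq_image)
rhs = k * log(abs(a)**2 + abs(b)**2)
print(np.isclose(lhs, rhs))  # True
```

Since the symplectic form is (i/2 times) the complex Hessian of the potential, multiplying the potential by k multiplies the form by k.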

I believe that this functor obeys

and it’s just a matter of unraveling the definitions to see that

So, very loosely, the functors \(\nu_k \colon \mathtt{Class} \to \mathtt{Class}\) and \(\mathrm{Sym}^k \colon \mathtt{Quant} \to \mathtt{Quant}\) should be thought of as replacing a classical or quantum system by a new ‘cloned’ version of that system. And they get along perfectly with quantization and its adjoint, projectivization!

• Part 1: the mystery of geometric quantization: how a quantum state space is a special sort of classical state space.

• Part 2: the structures besides a mere symplectic manifold that are used in geometric quantization.

• Part 3: geometric quantization as a functor with a right adjoint, ‘projectivization’, making quantum state spaces into a reflective subcategory of classical ones.

January 06, 2019

You have landed on this page because your HTTP client used TLSv1.0 to connect to this server. TLSv1.0 is deprecated and support for it is being dropped from both servers and browsers.

We are planning to drop support for TLSv1.0 from this server in the near future. Other sites you visit have probably already done so, or will do so soon. Accordingly, please upgrade your client to one that supports at least TLSv1.2. Since TLSv1.2 has been around for more than a decade, this should not be hard.


January 05, 2019

We are writing to let you know about a fantastic opportunity to learn about the emerging interdisciplinary field of applied category theory from some of its leading researchers at the ACT2019 School. It will begin February 18, 2019 and culminate in a meeting in Oxford, July 22–26. Applications are due January 30th; see below for details.

Applied category theory is a topic of interest for a growing community of researchers, interested in studying systems of all sorts using category-theoretic tools. These systems are found in the natural sciences and social sciences, as well as in computer science, linguistics, and engineering. The background and experience of our community’s members is as varied as the systems being studied.

The goal of the ACT2019 School is to help grow this community by pairing ambitious young researchers together with established researchers in order to work on questions, problems, and conjectures in applied category theory.

Who should apply

Anyone from anywhere who is interested in applying category-theoretic methods to problems outside of pure mathematics. This is emphatically not restricted to math students, but one should be comfortable working with mathematics. Knowledge of basic category-theoretic language—the definition of monoidal category for example—is encouraged.

We will consider advanced undergraduates, PhD students, and post-docs. We ask that you commit to the full program as laid out below.

Instructions for how to apply can be found below the research topic descriptions.

Senior research mentors and their topics

Below is a list of the senior researchers, each of whom describes a research project that their team will pursue, as well as the background reading that will be studied between now and July 2019.

Miriam Backens

Title: Simplifying quantum circuits using the ZX-calculus

Description: The ZX-calculus is a graphical calculus based on the category-theoretical formulation of quantum mechanics. A complete set of graphical rewrite rules is known for the ZX-calculus, but not for quantum circuits over any universal gate set. In this project, we aim to develop new strategies for using the ZX-calculus to simplify quantum circuits.

Tobias Fritz

Description: We all know that 2+2+1+1 evaluates to 6. A less familiar notion is that it can partially evaluate to 5+1. In this project, we aim to study the compositional structure of partial evaluation in terms of monads and the bar construction and see what this has to do with financial risk via second-order stochastic dominance.
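As a toy illustration of the idea only (the project's actual subject is the monad and bar-construction formalism, which this does not attempt), one might model a partial evaluation of a formal sum as merging some of its summands without collapsing the whole expression to a single number:

```python
# Toy model: a formal sum is a list of summands; a "partial evaluation"
# collapses chosen groups of summands into their sums.

def partial_evaluate(terms, groups):
    """Collapse each group of indices into its sum, e.g.
    [2, 2, 1, 1] with groups [[0, 1, 2], [3]] -> [5, 1]."""
    return [sum(terms[i] for i in g) for g in groups]

expr = [2, 2, 1, 1]
print(partial_evaluate(expr, [[0, 1, 2], [3]]))  # [5, 1]
print(partial_evaluate(expr, [[0, 1, 2, 3]]))    # [6], the total evaluation
```

The compositional structure the project studies lives in how such partial evaluations themselves compose.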

Pieter Hofstra

Title: Complexity classes, computation, and Turing categories

Description: Turing categories form a categorical setting for studying computability without bias towards any particular model of computation. It is not currently clear, however, that Turing categories are useful to study practical aspects of computation such as complexity. This project revolves around the systematic study of step-based computation in the form of stack-machines, the resulting Turing categories, and complexity classes. This will involve a study of the interplay between traced monoidal structure and computation. We will explore the idea of stack machines qua programming languages, investigate the expressive power, and tie this to complexity theory. We will also consider questions such as the following: can we characterize Turing categories arising from stack machines? Is there an initial such category? How does this structure relate to other categorical structures associated with computability?

Bartosz Milewski

Title: Traversal optics and profunctors

Description: In functional programming, optics are ways to zoom into a specific part of a given data type and mutate it. Optics come in many flavors such as lenses and prisms and there is a well-studied categorical viewpoint, known as profunctor optics. Of all the optic types, only the traversal has resisted a derivation from first principles into a profunctor description. This project aims to do just this.

Mehrnoosh Sadrzadeh

Title: Formal and experimental methods to reason about dialogue and discourse using categorical models of vector spaces

Description: Distributional semantics argues that meanings of words can be represented by the frequency of their co-occurrences in context. A model extending distributional semantics from words to sentences has a categorical interpretation via Lambek’s syntactic calculus or pregroups. In this project, we intend to further extend this model to reason about dialogue and discourse utterances where people interrupt each other, there are references that need to be resolved, disfluencies, pauses, and corrections. Additionally, we would like to design experiments and run toy models to verify predictions of the developed models.
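A minimal sketch of the distributional starting point, using a made-up three-sentence corpus (the categorical sentence- and discourse-level models the project pursues go well beyond this):

```python
from collections import Counter

# Toy corpus, invented for illustration.
corpus = [
    ["cats", "chase", "mice"],
    ["dogs", "chase", "cats"],
    ["mice", "eat", "cheese"],
]

def cooccurrence_vector(word, window=1):
    """Represent a word by the counts of words appearing within
    `window` positions of it: the core distributional idea."""
    ctx = Counter()
    for sentence in corpus:
        for i, w in enumerate(sentence):
            if w != word:
                continue
            lo, hi = max(0, i - window), min(len(sentence), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    ctx[sentence[j]] += 1
    return ctx

print(cooccurrence_vector("cats"))  # Counter({'chase': 2})
```

Words with similar contexts end up with similar vectors, which is what makes the vector-space semantics compositional models build on.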

David Spivak

Title: Toward a mathematical foundation for autopoiesis

Description: An autopoietic organization—anything from a living animal to a political party to a football team—is a system that is responsible for adapting and changing itself, so as to persist as events unfold. We want to develop mathematical abstractions that are suitable to found a scientific study of autopoietic organizations. To do this, we’ll begin by using behavioral mereology and graphical logic to frame a discussion of autopoeisis, most of all what it is and how it can be best conceived. We do not expect to complete this ambitious objective; we hope only to make progress toward it.

School structure

All of the participants will be divided up into groups corresponding to the projects. A group will consist of several students, a senior researcher, and a TA. Between January and June, we will have a reading course devoted to building the background necessary to meaningfully participate in the projects. Specifically, two weeks are devoted to each paper from the reading list. During this two week period, everybody will read the paper and contribute to discussion in a private online chat forum. There will be a TA serving as a domain expert and moderating this discussion. In the middle of the two week period, the group corresponding to the paper will give a presentation via video conference. At the end of the two week period, this group will compose a blog entry on this background reading that will be posted to the n-category cafe.

After all of the papers have been presented, there will be a two-week visit to Oxford University, 15–26 July 2019. The second week is solely for participants of the ACT2019 School. Groups will work together on research projects, led by the senior researchers.

The first week of this visit is the ACT2019 Conference, where the wider applied category theory community will arrive to share new ideas and results. It is not part of the school, but there is a great deal of overlap and participation is very much encouraged. The school should prepare students to be able to follow the conference presentations to a reasonable degree.

To apply

To apply please send the following to act2019school@gmail.com by January 30th, 2019:

Your CV

A document with:

An explanation of any relevant background you have in category theory or any of the specific projects areas

The date you completed or expect to complete your Ph.D., and a one-sentence summary of its subject matter.

Order of project preference

To what extent can you commit to coming to Oxford? (Availability of funding is uncertain at this time.)

A brief statement (~300 words) on why you are interested in the ACT2019 School. Some prompts:

how can this school contribute to your research goals?

how can this school help in your career?

Also, arrange to have a brief letter of recommendation sent on your behalf to act2019school@gmail.com.

January 04, 2019

I spent most of the past two days in the “Arts 2” building of Queen Mary University of London, on Mile End Road. According to Wikipedia, Mile End was one of the earliest suburbs of London, recorded in 1288 as …

There was a time when you wouldn’t catch sight of this academic in Ireland over Christmas – I used to head straight for the ski slopes as soon as term ended. But family commitments and research workloads have put paid to that, at least for a while, and I’m not sure it’s such a bad thing. Like many academics, I dislike being away from the books for too long and there is great satisfaction to be had in catching up on all the ‘deep roller’ stuff one never gets to during the teaching semester.

The professor in disguise in former times

The first task was to get the exam corrections out of the way. This is a job I quite enjoy, unlike most of my peers. I’m always interested to see how the students got on and it’s the only task in academia that usually takes slightly less time than expected. Then it was on to some rather more difficult corrections – putting together revisions to my latest research paper, as suggested by the referee. This is never a quick job, especially as the points raised are all very good and some quite profound. It helps that the paper has been accepted to appear in Volume 8 of the prestigious Einstein Studies series, but this is a task that is taking some time.

Other grown-up stuff includes planning for upcoming research conferences – two abstracts now in the post, let’s see if they’re accepted. I also spent a great deal of the holidays helping to organize an international conference on the history of physics that will be hosted in Ireland in 2020. I have very little experience in such things, so it’s extremely interesting, if time consuming.

So there is a lot to be said for spending Christmas at home, with copious amounts of study time uninterrupted by students or colleagues. An interesting bonus is that a simple walk in the park or by the sea seems a million times more enjoyable after a good morning’s swot. I’ve never really holidayed well and I think this might be why.

A walk on Dun Laoghaire pier yesterday afternoon

As for New Year’s resolutions, I’ve taken up Ciara Kelly’s challenge of a brisk 30-minute walk every day. I also took up tennis in a big way a few months ago – now there’s a sport that is a million times more practical in this part of the world than skiing.

January 03, 2019

Solar panels and hot water systems are great ways to save some serious cash on your energy bills. You make good use of the sun, maximizing a natural resource rather than relying on ones that can harm the Earth in the long run.

However, a solar panel system must be properly used and maintained to make sure you are getting the most out of it. Many owners do not know how to use their system properly, which wastes both energy and money.

Make Use of Boiler Timers and Solar Controllers

Ask your solar panel supplier if they can provide you with boiler timers and solar controllers. These make sure the water is only heated by the backup heating source after the sun has already heated it as much as it can, which usually means late in the afternoon, once the panels are no longer in direct sunlight.
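The control logic amounts to a simple rule. A sketch, with an assumed solar-window cutoff hour and target temperature (both numbers are illustrative, not from any manufacturer):

```python
def backup_heater_should_run(hour, tank_temp_c, target_c=60, solar_window_end=17):
    """Run the backup heater only after the solar window has closed,
    and only if the sun left the tank below the target temperature.
    (Hours and temperatures here are illustrative assumptions.)"""
    return hour >= solar_window_end and tank_temp_c < target_c

print(backup_heater_should_run(12, 45))  # False: let the sun keep working
print(backup_heater_should_run(18, 45))  # True: sun is done, still below target
print(backup_heater_should_run(18, 65))  # False: already at temperature
```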

You should also see to it that the cylinder has enough cold water for the sun to heat up after you have used up all of the hot water. This is to ensure that you will have hot water to use for the next day, which is especially important if you use hot water in the morning.

Check the Cylinder and Pipes Insulation

After having the solar panels and hot water system installed on your home, you should see to it that the cylinder and pipes are properly insulated. Failure to do so will result in inadequate hot water, making the system inefficient.

Solar panel systems that do not have insulated cylinders will not heat up your water enough, so make sure to ask the supplier and the people handling the installation about this to make the most out of your system.

Do Not Overfill the Storage

Avoid filling the hot water vessel to the brim, as doing so can make the system inefficient. Aside from not getting the water as hot as you want it to be, you will risk the chance of having the system break down sooner than you expect.

Ask the supplier or the people installing the system to install a twin coil cylinder. This will allow the solar hot water system to heat up only one section of the coil cylinder, which is usually what the solar collector or thermal store is for.

In cases wherein the dedicated solar volume is not used, the timing of the backup heating will have a huge impact on the solar hot water system’s performance. This usually happens in systems that do not require the current cylinder to be changed.

Knowing how to properly use and maintain your solar hot water system is a huge time and money saver. It definitely would not hurt to ask questions from your solar panel supplier and installer, so make sure to ask them the questions that you have in mind. Enjoy your hot water and make sure to have your system checked every once in a while!

January 01, 2019

It is unfortunate that my first post of the New Year is about sad news from December 2018. Prominent Stanford physicist Shoucheng Zhang passed away in early December in an apparent suicide. He was only 55 and, according to his family, had been suffering from bouts of depression. But what triggered this report is the possible connection between him and U.S.–China relations, which, to be clear, is purely a rumor right now.

Zhang was originally recruited in 2008 under the Thousand Talents program — a CCP effort to attract top scientists from overseas to work in China — to conduct research at Tsinghua University in Beijing. Zhang was active in helping U.S.-trained Chinese researchers return home, and expressed his desire to help “bring back the front-lines of research to China” in a recent interview with Chinese news portal Sina.

Zhang’s venture capital firm Digital Horizon Capital (DHVC), formerly known as Danhua Capital, was recently linked to China’s “Made in China 2025” technology dominance program in a Nov. 30 U.S. Trade Representative (USTR) report. According to the report, venture capital firms like DHVC are ultimately aimed at allowing China to access vital technology from U.S. startups. Zhang’s firm lists 113 U.S. companies in its portfolio, most falling within emerging sectors that the Chinese government has identified as strategic priorities.

The “Made in China 2025” program combines economic espionage and aggressive business acquisitions to aid China’s quest to become a tech manufacturing superpower, the USTR report continues. The program was launched in 2015 and has been cited by the Trump administration as evidence that the Chinese government is engaged in a strategic effort to steal American technological expertise.

I have absolutely no knowledge of any of this. I can only mourn the brilliant mind that we have lost. I first heard of “S.C. Zhang” when I was still working as a grad student in condensed matter physics, especially on the high-Tc superconductors. He published a paper in Science, authored by him alone, on SO(5) symmetry as the basis of a unified theory of superconductivity and antiferromagnetism [1]. That publication created quite a shakeup in the condensed matter theory world at the time.

It was a bit later that I learned that he came out of an expertise in elementary particle physics and switched fields to dabble in condensed matter (see, kids? I told you that various topics in physics are connected and interrelated!). Of course, his latest ground-breaking work was the initial proposal for topological insulators [2]. This was Nobel Prize-caliber work, in my opinion.

Besides that, I’ve often cited one of his writings when the issue of emergent phenomena comes up [3]. As someone trained in high-energy/elementary particle physics, he definitely had the expertise to talk about both sides of the coin: reductionism versus emergent phenomena.

Whatever the circumstances surrounding his death, we have lost a brilliant physicist. If topological insulators become the rich playground for physicists and engineers in the years to come, as they are expected to, I hope the world remembers his name as someone who was responsible for this advancement.

Zz.

December 31, 2018

I was thinking about dropping support for TLSv1.0 on this webserver. All the major browser vendors have announced that they are dropping it from their browsers. And you’d think that, since TLSv1.2 has been around for a decade, even very old clients ought to be able to negotiate a TLSv1.2 connection.

But when I checked, imagine my surprise: this webserver receives a ton of TLSv1.0 connections… including from the application that powers Planet Musings. Yikes!

The latter is built around the Universal Feed Parser, which uses the standard Python urllib2 to negotiate the connection. And therein lay the problem …

Even if I’m still supporting TLSv1.0, others have already dropped support for it.

Now, you might find it strange that urllib2 defaults to a TLSv1.0 connection when it’s certainly capable of negotiating something more secure (whatever OpenSSL supports). But, prior to Python 2.7.9, urllib2 didn’t even check the server’s SSL certificate. Any encryption was bogus (wide open to a man-in-the-middle attack). So why bother negotiating a more secure connection?

Switching from the system Python to Python 2.7.15 (installed by Fink) yielded a slew of

Actually, the lines in red aren’t strictly necessary. As long as you set a ssl.SSLContext(), a suitable set of root certificates gets loaded. But, honestly, I don’t trust the internals of urllib2 to do the right thing anymore, so I want to make sure that a well-curated set of root certificates is used.
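For the record, here is a sketch of the same fix in present-day Python (a Python 3 equivalent; the post itself concerns Python 2.7, and the CA-bundle path below is a placeholder):

```python
import ssl
import urllib.request  # urllib2's successor in Python 3

# Build the TLS context explicitly rather than trusting library defaults.
ctx = ssl.create_default_context()            # verifies certs, loads default CAs
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse TLSv1.0 and TLSv1.1

# To pin a well-curated root-certificate bundle instead of the default one
# (the path is a placeholder):
# ctx.load_verify_locations("/path/to/cacert.pem")

opener = urllib.request.build_opener(urllib.request.HTTPSHandler(context=ctx))
# opener.open("https://example.com/atom.xml") now negotiates TLSv1.2 or better
```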

Update:

This article goes some of the way towards explaining the brokenness of Python’s TLS implementation on MacOSX. But only some of the way …

Update 2:

Another offender turned out to be the very application (MarsEdit 3) that I used to prepare this post. Upgrading to MarsEdit 4 was a bit of a bother. Apple’s App-sandboxing prevented my Markdown+itex2MML text filter from working: one is no longer allowed to use IPC::Open2 to pipe text through the commandline itex2MML. So I had to create a Perl Extension Module for itex2MML. Now there’s a MathML::itex2MML module on CPAN to go along with the Rubygem.

December 29, 2018

I am dismayed by the plethora of null results coming out of my experiment, as well as from our friendly rivals, at the Large Hadron Collider. Don’t get me wrong, null results are important and it is a strength of …

Electric vehicles (EVs) are seen as the future of the automotive industry. With sales projected to reach 30 million by 2030, electric cars are slowly but surely taking over their market. The EV poster boy, the Tesla Model S, is a consistent frontrunner in luxury car sales. However, there are still doubts about the electric car’s environmental benefits.

Established names like General Motors, Audi, and Nissan are all hopping on the electric vehicle wave, and competition has made EVs more attractive to the public. This is so in spite of government threats to cut federal tax credits on electric cars, and fluctuating prices for battery components like graphite may also be a concern. Some US states, like California and New York, plan to ban the sale of cars with internal combustion engines by 2050. Should you take the leap and go full electric?

Cost

The Tesla Model S starts at $75,700 and the SUV Model X at $79,500, but there are more affordable options for your budget. The 2018 Ford Focus Electric, Hyundai Ioniq Electric, and Nissan Leaf all start well under $30,000. Tesla even has the $35,000 Model 3, for those who want to experience the brand’s offerings at a lower price.

The Chevrolet Bolt EV ($36,620) is also a favorite among those who want to make use of the $7,500 tax credit. The tax credit brings the Bolt EV’s price into the sub $30,000 range.
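The arithmetic behind that claim, using the figures quoted above:

```python
# Effective up-front price after the $7,500 federal tax credit
# (sticker price taken from the text).
tax_credit = 7_500
bolt_ev_msrp = 36_620

effective_price = bolt_ev_msrp - tax_credit
print(effective_price)  # 29120: the "sub $30,000 range"
```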

EVs still cost more up front than their gasoline-powered counterparts. The regular 2018 Ford Focus starts at $18,825, about $10,000 cheaper than its electric sibling. Even so, electric cars still cost less to fuel.

Charging Options

EV charging has three levels:

Level one uses your wall outlet to charge. Most electric cars come with level 1 chargers that you can plug into the nearest socket. This is the slowest way to charge your EV. You’ll have to leave it charging overnight to top it up.

Level two is what you would commonly find on public charging stations. It’s faster than a level 1 charger, taking about three to eight hours to recharge. You can also have a level 2 charger installed in your home with a permit and the help of an electrician.

Level three or DC Fast Charge (DCFC) stations are usually found in public as well. DCFCs can fully charge a vehicle in the span of 20 minutes to one hour.
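A rough back-of-the-envelope comparison of the three levels; the power ratings below are typical figures assumed for illustration, not taken from the article:

```python
# Assumed typical charger power ratings (kW), for illustration only.
LEVEL_KW = {"level 1": 1.4, "level 2": 7.0, "dcfc": 50.0}

def hours_to_charge(battery_kwh, level):
    """Idealized full-charge time: pack capacity divided by charger power.
    Real charging tapers near full, so actual times run longer."""
    return battery_kwh / LEVEL_KW[level]

for level in LEVEL_KW:
    print(f"{level}: {hours_to_charge(60, level):.1f} h for a 60 kWh pack")
```

This reproduces the article's orders of magnitude: overnight for level 1, a few hours for level 2, and around an hour for DC fast charging.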

There are currently 23,809 electric vehicle charging stations in the USA and Canada. Some argue that this number is meager compared to the 168,000 gas stations in the same area. Loren McDonald of CleanTechnica says this isn’t really a problem, since electric vehicles still account for less than 0.29% of the automobiles in the US.

McDonald also argued that most of the charging would be done at home. There are still continuous efforts to build charging stations to suit the needs of electric car users across the country.

The Bumpy Road Ahead

Despite their promise of a greener drive for everyone, electric cars have received their fair share of scrutiny from environmentalists as well. The Fraunhofer Institute for Building Physics stated that the energy needed to make an electric vehicle is more than double what it takes to make a conventional one, because of its battery.

The International Council on Clean Transportation, however, says that battery manufacturing emissions may be similar to the ones from internal combustion engine manufacturing. The only difference is that electric cars don’t produce as much greenhouse gases as conventional ones do in the long run. The ICCT also says that with efforts to reduce the use of carbon in power sources, emissions from battery manufacturing will decrease by around 17%.

Electric vehicles are becoming more accessible. Manufacturers are creating electric versions of existing models. They’re even sponsoring electric charging stations around the country. With moves to use cleaner energy in manufacturing, it only makes sense to switch. You can do your part now and get yourself an EV with more affordable options available.

It also makes sense to wait for more competition to drive prices down if you don’t have the cash now. Either way, it’s not a matter of “if” but “when” you’ll switch to an EV for the greater good.

December 27, 2018

Many Americans take medications on a daily basis. A survey by Consumer Reports reveals that more than half of the population takes a prescription drug, and among this group a good number take more than three. Others take prescribed medications along with over-the-counter medicines, vitamins, and other supplements.

As Americans get older and more drugs become available to manage or cure diseases, this percentage is likely to increase. That trend brings up an underrated medication concern: how likely is drug contamination?

How Common Is Drug Contamination?

Drug contamination incidents do not happen all the time; in fact, they tend to be rare, because companies implement strict process regulations and quality control. On the downside, when they do happen, the implications are severe.

This problem can happen in different ways. One is tampering: recall the Chicago Tylenol murders of 1982, when seven people died after ingesting acetaminophen laced with potassium cyanide.

It can also happen at the manufacturing level. A good example is the 2009 contamination at a Sanofi Genzyme plant in Massachusetts, where a viral presence was detected in one of the bioreactors used to produce Cerezyme, a drug used to treat type 1 Gaucher disease. Although the virus could not infect humans, it impaired the cells’ viability.

Due to this incident, the company had to write off close to $30 million worth of products and lose over $300 million in revenue. Since it’s also one of the few manufacturers of medication for this disease, the shutdown led to a supply shortage for more than a year.

Using Fluid Chromatography as a Solution

To ensure the viability and safety of the medications, companies such as selerity.com provide supercritical and high-performance fluid chromatography.

Chromatography is the process of breaking down a product, such as a drug, into its components. In fluid or liquid chromatography, dissolved molecules or ions separate depending on how they interact with the mobile and the stationary phases.

In turn, the pharmaceutical analysts can determine the level of purity of the drug as well as the presence of minute traces of contaminants. Besides quality control, the technique can enable scientists to find substances that can be helpful in future research.

The level of accuracy of this test is already high, thanks to developments in the equipment. Still, it may not detect all types of compounds, due to factors such as thermal instability.

Newer machines can have programmable settings that allow users to set temperatures to extremely high levels or subzero conditions. They can also have features that let users replicate a test many times with the same level of consistency.

It takes more than chromatography to prevent the contamination of medications, especially during manufacturing. It also requires high-quality control standards and strict compliance with industry protocols. Chromatography, though, is one useful way to safeguard the health of drug consumers in the country.

We’re now pleased to advertise our preliminary list of invited speakers, together with key dates for others who’d like to give talks.

The preliminary Invited Speakers include three of your Café hosts, and are as follows:

John Baez (Riverside)

Neil Ghani (Strathclyde)

Marco Grandis (Genoa)

Simona Paoli (Leicester)

Emily Riehl (Johns Hopkins)

Mike Shulman (San Diego)

Manuela Sobral (Coimbra)

Further invited speakers are to be confirmed.

Contributed talks

We are offering an early round of submissions and decisions to allow for those who need an early decision (e.g. for funding purposes) or want preliminary feedback for a possible resubmission. The timetable is as follows:

The first International Conference on Homotopy Type Theory, HoTT 2019, will take place from August 12th to 17th, 2019 at Carnegie Mellon University in Pittsburgh, USA. Here is the organizers’ announcement:

The invited speakers will be:

Ulrik Buchholtz (TU Darmstadt, Germany)

Dan Licata (Wesleyan University, USA)

Andrew Pitts (University of Cambridge, UK)

Emily Riehl (Johns Hopkins University, USA)

Christian Sattler (University of Gothenburg, Sweden)

Karol Szumilo (University of Leeds, UK)

Submissions of contributed talks will open in January and conclude in March; registration will open sometime in the spring.

There will also be an associated Homotopy Type Theory Summer School in the preceding week, August 7th to 10th.

The topics and instructors are:

Cubical methods: Anders Mortberg

Formalization in Agda: Guillaume Brunerie

Formalization in Coq: Kristina Sojakova

Higher topos theory: Mathieu Anel

Semantics of type theory: Jonas Frey

Synthetic homotopy theory: Egbert Rijke

We expect some funding to be available for students to attend the summer school and conference.

I have a question about the relationship between Lawvere theories and monads.

Every morphism of Lawvere theories $f \colon T \to T'$ induces a morphism of monads $M_f \colon M_T \Rightarrow M_{T'}$, which can be calculated using the universal property of the coend formula for $M_T$. (This can be found in Hyland and Power’s paper Lawvere theories and monads.)

On the other hand, $f \colon T \to T'$ gives a functor $f^\ast \colon Mod(T') \to Mod(T)$ defined by precomposition with $f$. Because everything is nice enough, $f^\ast$ always has a left adjoint $f_\ast \colon Mod(T) \to Mod(T')$. (Details of this can be found in Toposes, Triples and Theories.)

My question is the following:

What relationship is there between the left adjoint $f_\ast \colon Mod(T) \to Mod(T')$ and the morphism of monads $M_f \colon M_T \Rightarrow M_{T'}$ computed using coends?

In the examples I can think of, the components of $M_f$ are given by the unit of the adjunction between $f^\ast$ and $f_\ast$, but I cannot find a reference explaining this. It doesn’t seem to be in Toposes, Triples, and Theories.

What is the shape of an electron? If you recall pictures from your high school science books, the answer seems quite clear: an electron is a small ball of negative charge that is smaller than an atom. This, however, is quite far from the truth.

A simple model of an atom, with a nucleus made of protons, which have a positive charge, and neutrons, which are neutral. The electrons, which have a negative charge, orbit the nucleus. Vector FX / Shutterstock.com

The electron is commonly known as one of the main components of atoms making up the world around us. It is the electrons surrounding the nucleus of every atom that determine how chemical reactions proceed. Their uses in industry are abundant: from electronics and welding to imaging and advanced particle accelerators. Recently, however, a physics experiment called Advanced Cold Molecule Electron EDM (ACME) put an electron on the center stage of scientific inquiry. The question that the ACME collaboration tried to address was deceptively simple: What is the shape of an electron?

Classical and quantum shapes?

As far as physicists currently know, electrons have no internal structure – and thus no shape in the classical meaning of this word. In the modern language of particle physics, which tackles the behavior of objects smaller than an atomic nucleus, the fundamental blocks of matter are continuous fluid-like substances known as “quantum fields” that permeate the whole space around us. In this language, an electron is perceived as a quantum, or a particle, of the “electron field.” Knowing this, does it even make sense to talk about an electron’s shape if we cannot see it directly in a microscope – or any other optical device for that matter?

To answer this question we must adapt our definition of shape so it can be used at incredibly small distances, or in other words, in the realm of quantum physics. Seeing different shapes in our macroscopic world really means detecting, with our eyes, the rays of light bouncing off different objects around us.

Simply put, we define shapes by seeing how objects react when we shine light onto them. While this might be a weird way to think about the shapes, it becomes very useful in the subatomic world of quantum particles. It gives us a way to define an electron’s properties such that they mimic how we describe shapes in the classical world.

What replaces the concept of shape in the micro world? Since light is nothing but a combination of oscillating electric and magnetic fields, it would be useful to define quantum properties of an electron that carry information about how it responds to applied electric and magnetic fields. Let’s do that.

Electrons in electric and magnetic fields

As an example, consider the simplest property of an electron: its electric charge. It describes the force – and ultimately, the acceleration the electron would experience – if placed in some external electric field. A similar reaction would be expected from a negatively charged marble – hence the “charged ball” analogy of an electron that is in elementary physics books. This property of an electron – its charge – survives in the quantum world.

Likewise, another “surviving” property of an electron is called the magnetic dipole moment. It tells us how an electron would react to a magnetic field. In this respect, an electron behaves just like a tiny bar magnet, trying to orient itself along the direction of the magnetic field. While it is important to remember not to take those analogies too far, they do help us see why physicists are interested in measuring those quantum properties as accurately as possible.

What quantum property describes the electron’s shape? There are, in fact, several of them. The simplest – and the most useful for physicists – is the one called the electric dipole moment, or EDM.

In classical physics, an EDM arises when there is a spatial separation of charges. A uniformly charged sphere, which has no separation of charges, has an EDM of zero. But imagine a dumbbell whose weights are oppositely charged, with one side positive and the other negative. In the macroscopic world, this dumbbell would have a non-zero electric dipole moment. Since the shape of an object reflects the distribution of its electric charge, a non-zero EDM would imply a shape that differs from a sphere. Thus, naively, the EDM quantifies the “dumbbellness” of a macroscopic object.
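In formulas, the classical EDM is just the charge-weighted sum of positions; the textbook definition, added here for illustration:

```latex
% Electric dipole moment of point charges q_i at positions r_i:
\vec{p} = \sum_i q_i\,\vec{r}_i
% For the dumbbell: charges +q and -q separated by a distance d give
% |\vec{p}| = q d, while a uniformly charged sphere gives \vec{p} = 0.
```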

Electric dipole moment in the quantum world

The story of EDM, however, is very different in the quantum world. There the vacuum around an electron is not empty and still. Rather it is populated by various subatomic particles zapping into virtual existence for short periods of time.

The Standard Model of particle physics has correctly predicted all of these particles. If the ACME experiment discovered that the electron had an EDM, it would suggest there were other particles that had not yet been discovered. (Image: Designua/Shutterstock.com)

These virtual particles form a “cloud” around an electron. If we shine light onto the electron, some of the light could bounce off the virtual particles in the cloud instead of the electron itself.

This would change the numerical values of the electron’s charge and magnetic and electric dipole moments. Performing very accurate measurements of those quantum properties would tell us how these elusive virtual particles behave when they interact with the electron and if they alter the electron’s EDM.

Most intriguing, among those virtual particles there could be new, unknown species of particles that we have not yet encountered. To see their effect on the electron’s electric dipole moment, we need to compare the result of the measurement to theoretical predictions of the size of the EDM calculated in the currently accepted theory of the Universe, the Standard Model.

So far, the Standard Model has accurately described all laboratory measurements that have ever been performed. Yet, it is unable to address many of the most fundamental questions, such as why matter dominates over antimatter throughout the universe. The Standard Model makes a prediction for the electron’s EDM too: it requires it to be so small that ACME would have had no chance of measuring it. But what would have happened if ACME had actually detected a non-zero value for the electric dipole moment of the electron?

View of the Large Hadron Collider in its tunnel near Geneva, Switzerland. In the LHC two counter-rotating beams of protons are accelerated and forced to collide, generating various particles. (Image: AP Photo/KEYSTONE/Martial Trezzini)

Patching the holes in the Standard Model

Theoretical models have been proposed that fix shortcomings of the Standard Model, predicting the existence of new heavy particles. These models may fill in the gaps in our understanding of the universe. To verify such models we need to prove the existence of those new heavy particles. This could be done through large experiments, such as those at the international Large Hadron Collider (LHC), by directly producing new particles in high-energy collisions.

Alternatively, we could see how those new particles alter the charge distribution in the “cloud” and thereby the electron’s EDM. Thus, an unambiguous observation of the electron’s electric dipole moment would prove that new particles are in fact present. That was the goal of the ACME experiment.

This is the reason why a recent article in Nature about the electron caught my attention. Theorists like myself use the results of measurements of the electron’s EDM – along with measurements of the properties of other elementary particles – to help identify new particles and make predictions of how they can be better studied. This is done to clarify the role of such particles in our current understanding of the universe.

What should be done to measure the electric dipole moment? We need a source of a very strong electric field to test the electron’s reaction. One possible source of such fields can be found inside molecules such as thorium monoxide – the molecule ACME used in its experiment. By shining carefully tuned lasers at these molecules, a reading of the electron’s electric dipole moment can be obtained, provided it is not too small.

However, as it turned out, it is. Physicists of the ACME collaboration did not observe the electric dipole moment of an electron – which suggests that its value is too small for their experimental apparatus to detect. This fact has important implications for our understanding of what we could expect from the Large Hadron Collider experiments in the future.

Interestingly, the fact that the ACME collaboration did not observe an EDM actually rules out the existence of heavy new particles that could have been easiest to detect at the LHC. This is a remarkable result for a tabletop-sized experiment that affects both how we would plan direct searches for new particles at the giant Large Hadron Collider, and how we construct theories that describe nature. It is quite amazing that studying something as small as an electron could tell us a lot about the universe.

A short animation describing the physics behind the EDM and the ACME collaboration’s findings.

December 21, 2018

Every business requires a steady stream of clients to succeed. The world now does everything online, from taking classes and shopping to finding services. This has driven the rise of digital marketing as a way to boost sales by targeting online customers.

While you can find local customers by offering services such as free spinal cord exams, online patients will remain unreached. As a result, digital marketing is crucial. If you have not started marketing digitally, here are five strategies to generate more leads online.

1. A Good Website

Your website is your main marketing tool for reaching an online audience. It gives you an online presence, so it only makes sense to invest in a good website – one that is visually appealing while at the same time functioning well.

The website design should be appealing enough to attract your target audience and make you stand out from your competitors.

2. Search Engine Optimization (SEO)

SEO is an effective marketing strategy that helps clients to find you by searching on Google. One of the significant factors in conducting SEO is the use of relevant keywords that patients use when entering queries on search engines.

Creating appropriate content will also go a long way toward earning you a favorable ranking. You don’t have to limit your topics to your products and services; talk about trends in your industry, too. Anything relevant to your target audience will help.

3. Social Media

Social media platforms are a good place to start building your online presence. People are always seeking recommendations, and social media offers an opportunity to get referrals. Some people even get the information they need from social media now, not search engines.

Brand your profiles in a similar way to boost recognition and share content regularly. Add social media buttons to your website to encourage people to share. Another tip is to create a social media personality for your brand that is approachable and fun, so people can trust your business.

4. Online Directories

It is not enough to have a website and social media profiles. You need to be listed in online directories as well. Remember that it is in your best interest to cover as much digital ground as possible.

Popular online directories such as Google, Yahoo! and Yelp list businesses. The listings enable people to search for businesses by location. They can also read reviews left by other clients. Ensure that all your information is filled in accurately, and work toward getting many positive reviews.

5. Mobile-Friendly Site

Most people use mobile devices to access the internet, so make your website mobile-friendly for easy viewing and navigation. Besides, Google ranks mobile-friendly websites higher.

By optimizing your website using SEO and social media among other strategies, you can earn more online leads and boost the success of your business.

This project was motivated by our suspicion that the W (and its sibling, the Z boson) is actually more complicated than usually assumed. We think that it may have a self-similar structure. The details are quite technical, but the outline is the following: What we see and measure as a W at, say, the LHC or earlier experiments is actually not a point-like particle, although that is currently the most common view. But science has always been about changing the common ideas and replacing them with something new and better. So, our idea is that the W has a substructure. This substructure is a bit weird, because it is not made from additional elementary particles. It rather looks like a bubbling mess of quantum effects. Thus, we do not expect that we can isolate anything which resembles a physical particle within the W. And if we try to isolate something, we should not expect it to behave as a particle.

Thus, this scenario gives two predictions. One: Substructure needs to have space somewhere. Thus, the W should have a size. Two: Anything isolated from it should not behave like a particle. To test both ideas in the same way, we decided to look at the same quantity: The radius. Hence, we simulated a part of the standard model. Then we measured the size of the W in this simulation. Also, we tried to isolate the most particle-like object from the substructure, and also measured its size. Both of these measurements are very expensive in terms of computing time. Thus, our results are rather exploratory. Hence, we cannot yet regard what we found as final. But at least it gives us some idea of what is going on.

The first thing is the size of the W. Indeed, we find that it has a size, and one which is not too small either. The number itself, however, is far less accurate. The reason for this is twofold. On the one hand, we have only a part of the standard model in our simulations. On the other hand, we see artifacts. They come from the fact that our simulations can only describe some finite part of the world. The larger this part is, the more expensive the calculation. With what we had available, the part seems to be still so small that the W is big enough to 'bounce off the walls' fairly often. Thus, our results still show a dependence on the size of this part of the world. Though we try to account for this, it still leaves a sizable uncertainty in the final result. Nonetheless, the qualitative feature that the W has a significant size remains.

The other thing is the would-be constituents. We can indeed identify some kind of lumps of quantum fluctuations inside. But they do not behave like a particle, not even remotely. In particular, when trying to measure their size, we find that the square of their radius is negative! Even though the final value is still uncertain, this is nothing a real particle should have, because taking the square root of such a negative quantity to get the actual number yields an imaginary number. That is an abstract quantity which, while not identifiable with anything in everyday life, has a well-defined mathematical meaning. In the present case, it means this lump is nonphysical, as if you tried to upend a hole. Thus, this mess is really not a particle at all, in any conventional sense of the word. Still, what we could get from this is that such lumps - even though they are not really lumps - 'live' only in areas of our W much smaller than the W's size. So, at least they are contained, and they let the W be the well-behaved particle it is.
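As a small aside on that last step: formally taking the square root of a negative squared radius indeed lands you in the imaginary numbers, as a couple of lines of Python illustrate (the value here is made up; only its sign matters):

```python
import cmath

# Hypothetical negative mean-square radius (arbitrary units) -- illustrative only.
r_squared = -0.25

r = cmath.sqrt(r_squared)
print(r)  # 0.5j: a purely imaginary "radius", nothing a real particle should have
```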

So, the bottom line is that our simulations agreed with our ideas. That is good, but it is not enough. After all, who can tell whether what we simulate is actually the thing happening in nature? So, we will need an experimental test of this result. This is surprisingly complicated. After all, you cannot really use a measuring stick to get the size of a particle. Rather, what you do is throw other particles at it, and then see how much they are deflected. At least in principle.

Can this be done for the W? Yes, it can be done, but it is very indirect. Essentially, it could work as follows: Take the LHC, at which two protons are smashed into each other. In this smashing, it is possible that a Z boson is produced, which scatters off a W. So, you 'just' need to look at the W before and after. In practice, this is more complicated. Since we cannot send the W in there to hit the Z, we use the fact that mathematically this process is related to another one. If we get one, we get the other for free. This other process is that the produced Z, together with a lot of kinetic energy, decays into two W particles. These are then detected, and their directions measured.

As nice as this sounds, it is still horrendously complicated. The problem is that the Ws themselves decay into leptons and neutrinos before they reach the actual detector. And because the neutrinos essentially always escape undetected, one can only indirectly infer what has been going on. In particular, the directions of the Ws cannot easily be reconstructed. Still, in principle it should be possible, and we discuss this in our paper. So we can actually measure this size, in principle. It is now up to the experimental experts whether it can - and will - be done in practice.

December 07, 2018

There is always a great sense of satisfaction on the last day of the teaching semester. That great moment on a Friday afternoon when the last lecture is over, the last presentation is marked, and the term’s teaching materials can be transferred from briefcase to office shelf. I’m always tempted to jump in the car, and drive around the college carpark beeping madly. Of course there is the small matter of marking, from practicals, assignments and assessments to the end-of-semester exams, but that’s a very different activity!

The last day of term at WIT

For me, the semesterisation of teaching is one of the best aspects of life as an academic. I suppose it’s the sense of closure, of things finished – so different from research, where one paper just leads to another in a never-ending cycle. There never seems to be a good moment for a pause in the world of research, just a ton of papers I would like to write if I had the time.

The reason for this is quite simple – teaching. On top of my usual lecturing duties, I had to prepare and deliver a module in 4th-year particle physics this term. It was a very interesting experience and I learnt a lot, but preparing the module took up almost every spare moment of my time, nuking any chances of doing any meaningful research during the teaching term. And now I hear that I will be involved in the delivery of yet another new module next semester, oh joy.

This has long been my problem with the Institutes of Technology. With contact hours set at a minimum of 16 hours/week, there is simply far too much teaching (a situation that harks back to a time when lecturers taught to Diploma level only). While the high-ups in education in our capital city make noises about the importance of research and research-led teaching, they refuse to countenance any change in this for research-active staff in the IoTs. If anything, one has the distinct impression everyone would much rather we didn’t bother. I don’t expect this situation to change anytime soon – in all the talk about technological universities, I have yet to hear a single mention of new lecturer contracts.

After a conversation over coffee with one of the event planners over at the Natural History Museum, I had an idea and wandered over to talk to Angella Johnson (no relation), our head of demo labs. Within seconds we were looking at some possible props I might use in an event at the NHM in February. Will tell you more about it later!

November 30, 2018

“Science is always political,” asserted a young delegate at an international conference on the history of physics earlier this month. It was a very enjoyable meeting, but I noticed the remark caused a stir among many of the physicists in the audience.

In truth, the belief that the practice of science is never entirely free of politics has been a steady theme of historical scholarship for some years now, as can be confirmed by a glance at any scholarly journal on the history of science. At a conference specifically designed to encourage interaction between scientists, historians and sociologists of science, it was interesting to see a central tenet of modern scholarship openly questioned.

Famous debate

Where does the idea come from? A classic example of the hypothesis can be found in the book Leviathan and the Air-Pump by Steven Shapin and Simon Schaffer. In this highly influential work, the authors considered the influence of the politics of the English civil war and the restoration on the famous debate between scientist Robert Boyle and philosopher Thomas Hobbes concerning the role of experimentation in science. More recently, many American historians of science have suggested that much of the success of 20th century American science, from aeronautics to particle physics, was driven by the politics of the cold war.

Similarly, there is little question that CERN, the famous inter-European particle physics laboratory at Geneva, was constructed to stem the brain-drain of European physicists to the United States after the second World War. CERN has proved itself many times over as an outstanding example of successful international scientific collaboration, although Ireland has yet to join.

But do such examples imply that science is always influenced by politics? Some scientists and historians doubt this assertion. While one can see how a certain field or technology might be driven by national or international political concerns, the thesis seems less tenable when one considers basic research. In what way is the study of the expanding universe influenced by politics? Surely the study of the elementary particles is driven by scientific curiosity?

It seems to me that this internally-driven aspect of scientific research is sometimes overlooked by historians and sociologists of science. By ignoring the technical aspects of a given field, scholars sometimes miss the fact that a development followed naturally from what went before. The progression is rarely linear, but it is not random either.

Speculation

In addition, it is difficult to definitively prove a link between politics and a given scientific advance – such assertions involve a certain amount of speculation. For example, it is interesting to note that many of the arguments in Leviathan have been seriously questioned, although these criticisms have not received the same attention as the book itself.

That said, few could deny that research into climate science in the United States suffered many setbacks during the presidency of George W Bush, and a similar situation pertains now. But the findings of American climate science are no less valid than they were at any other time, and the international character of scientific enquiry ensures a certain objectivity and continuity of research. Put bluntly, there is no question that resistance to the findings of climate science is often politically motivated, but there is little evidence that climate science itself is political.

Another factor concerns the difference between the development of a given field and the dawning of an entirely new field of scientific inquiry. In a recent New York Times article titled “How politics shaped general relativity”, the American historian of science David Kaiser argued convincingly for the role played by national politics in the development of Einstein’s general theory of relativity in the United States. However, he did not argue that politics played a role in the original gestation of the theory – most scientists and historians would agree that Einstein’s quest was driven by scientific curiosity.

All in all, I think there is a danger of overstating the influence of politics on science. While national and international politics have an impact on every aspect of our lives, the innate drive of scientific progress should not be overlooked. Advances in science are generally propelled by the engine of internal logic, by observation, hypothesis and theory-testing. No one is immune from political upheaval, but science has a way of weeding out incorrect hypotheses over time.

For a change of pace this year, I went to Twitter and asked for suggestions for what to give thanks for in this annual post. There were a number of good suggestions, but two stood out above the rest: @etandel suggested Noether’s Theorem, and @OscarDelDiablo suggested the moons of Jupiter. Noether’s Theorem, according to which symmetries imply conserved quantities, would be a great choice, but in order to actually explain it I should probably first explain the principle of least action. Maybe some other year.

And to be precise, I’m not going to bother to give thanks for all of Jupiter’s moons. 78 Jovian satellites have been discovered thus far, and most of them are just lucky pieces of space debris that wandered into Jupiter’s gravity well and never escaped. It’s the heavy hitters — the four Galilean satellites — that we’ll be concerned with here. They deserve our thanks, for at least three different reasons!

Reason One: Displacing Earth from the center of the Solar System

Galileo discovered the four largest moons of Jupiter — Io, Europa, Ganymede, and Callisto — back in 1610, and wrote about his findings in Sidereus Nuncius (The Starry Messenger). They were the first celestial bodies to be discovered using that new technological advance, the telescope. But more importantly for our present purposes, it was immediately obvious that these new objects were orbiting around Jupiter, not around the Earth.

All this was happening not long after Copernicus had published his heliocentric model of the Solar System in 1543, offering an alternative to the prevailing Ptolemaic geocentric model. Both models were pretty good at fitting the known observations of planetary motions, and both required an elaborate system of circular orbits and epicycles — the realization that planetary orbits should be thought of as ellipses didn’t come along until Kepler published Astronomia Nova in 1609. As everyone knows, the debate over whether the Earth or the Sun should be thought of as the center of the universe was a heated one, with the Roman Catholic Church prohibiting Copernicus’s book in 1616, and the Inquisition putting Galileo on trial in 1633.

Strictly speaking, the existence of moons orbiting Jupiter is equally compatible with a heliocentric or geocentric model. After all, there’s nothing wrong with thinking that the Earth is the center of the Solar System, but that other objects can have satellites. However, the discovery brought about an important psychological shift. Sure, you can put the Earth at the center and still allow for satellites around other planets. But a big part of the motivation for putting Earth at the center was that the Earth wasn’t “just another planet.” It was supposed to be the thing around which everything else moved. (Remember that we didn’t have Newtonian mechanics at the time; physics was still largely an Aristotelian story of natures and purposes, not a bunch of objects obeying mindless differential equations.)

The Galilean moons changed that. If other objects have satellites, then Earth isn’t that special. And if it’s not that special, why have it at the center of the universe? Galileo offered up other arguments against the prevailing picture, from the phases of Venus to mountains on the Moon, and of course once Kepler’s ellipses came along the whole thing made much more mathematical sense than Ptolemy’s epicycles. Thus began one of the great revolutions in our understanding of our place in the cosmos.

Reason Two: Measuring the speed of light

Time is what clocks measure. And a clock, when you come right down to it, is something that does the same thing over and over again in a predictable fashion with respect to other clocks. That sounds circular, but it’s a nontrivial fact about our universe that it is filled with clocks. And some of the best natural clocks are the motions of heavenly bodies. As soon as we knew about the moons of Jupiter, scientists realized that they had a new clock to play with: by accurately observing the positions of all four moons, you could work out what time it must be. Galileo himself proposed that such observations could be used by sailors to determine their longitude, a notoriously difficult problem.

Danish astronomer Ole Rømer noted a puzzle when trying to use eclipses of Io to measure time: despite the fact that the orbit should be an accurate clock, the actual timings seemed to change with the time of year. Being a careful observational scientist, he deduced that the period between eclipses was longer when the Earth was moving away from Jupiter, and shorter when the two planets were drawing closer together. An obvious explanation presented itself: the light wasn’t traveling instantaneously from Jupiter and Io to us here on Earth, but rather took some time. By figuring out exactly how the period between eclipses varied, we could then deduce what the speed of light must be.

Rømer’s answer was that light traveled at about 220,000 kilometers per second. That’s pretty good! The right answer is 299,792 km/sec, about 36% greater than Rømer’s value. For comparison purposes, when Edwin Hubble first calculated the Hubble constant, he derived a value of about 500 km/sec/Mpc, whereas now we know the right answer is about 70 km/sec/Mpc. Using astronomical observations to determine fundamental parameters of the universe isn’t easy, especially if you’re the first one to do it.
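For the curious, the arithmetic behind such an estimate fits in a few lines of Python. The numbers are modern, rounded values (the astronomical unit, and the roughly 22-minute delay usually associated with Rømer’s data), so this is a sketch of the logic rather than a reproduction of his actual calculation:

```python
# Back-of-the-envelope version of Romer's inference: the eclipses of Io run
# ~22 minutes "late" once Earth has moved across the full diameter of its
# orbit, because the light has that much farther to travel.
AU_M = 1.496e11            # astronomical unit in meters (modern value)
orbit_diameter_m = 2 * AU_M
delay_s = 22 * 60          # ~22 minutes of accumulated eclipse delay

c_estimate = orbit_diameter_m / delay_s   # meters per second
print(f"{c_estimate / 1000:,.0f} km/s")   # roughly 227,000 km/s
```

The result lands close to Rømer’s 220,000 km/s, and within about a quarter of the true value.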

Reason Three: Looking for life

Here in the present day, Jupiter’s moons have not lost their fascination or importance. As we’ve been able to study them in greater detail, we’ve learned a lot about the history and nature of the Solar System more generally. And one of the most exciting prospects is that one or more of these moons might harbor life.

It used to be common to think about the possibilities for life outside Earth in terms of a “habitable zone,” the region around a star where temperatures allowed planets to have liquid water. (Many scientists think that liquid water is a necessity for life to exist — but maybe we’re just being parochial about that.) In our Solar System, Earth is smack-dab in the middle of the habitable zone, and Mars just sneaks in. Both Venus and Jupiter are outside, on opposite ends.

But there’s more than one way to have liquid water. It turns out that both Europa and Ganymede, as well as Saturn’s moons Titan and Enceladus, are plausible homes for large liquid oceans. Europa, in particular, is thought to possess a considerable volume of liquid water underneath an icy crust — approximately two or three times as much water as in all the oceans on Earth. The point is that solar radiation isn’t the only way to heat up water and keep it at liquid temperatures. On Europa, it’s likely that heat is generated by the tidal pull from Jupiter, which stretches and distorts the moon’s crust as it orbits.

Does that mean there could be life there? Maybe! Nobody really knows. Smart money says that we’re more likely to find life in a wet environment like Europa than in a dry one like Mars. And we’re going to look — the Europa Clipper mission is scheduled for launch by 2025.

If you can’t wait for then, go back and watch the movie Europa Report. And while you do, give thanks to Galileo and his discovery of these fascinating celestial bodies.

November 16, 2018

I'm late to the party. Yes, I use the word party, because the outpouring of commentary noting the passing of Stan Lee has been, rightly, marked with a sense of celebration of his contributions to our culture. Celebration of a life full of activity. In the spirit of a few of the "what were you doing when you heard..." stories I've heard, involving nice coincidences and ironies, I've got one of my own. I'm not exactly sure when I heard the announcement on Monday, but I noticed today that it was also on Monday that I got an email giving me some news* about the piece I wrote about the Black Panther earlier this year for the publication The Conversation. The piece is about the (then) pending big splash the movie about the character (co-created by Stan Lee in the 60s) was about to make in the larger culture, the reasons for that, and why it was also a tremendous opportunity for science. For science? Yes, because, as I said there:

Vast audiences will see black heroes of both genders using their scientific ability to solve problems and make their way in the world, at an unrivaled level.

and

Improving science education for all is a core endeavor in a nation’s competitiveness and overall health, but outcomes are limited if people aren’t inspired to take an interest in science in the first place. There simply are not enough images of black scientists – male or female – in our media and entertainment to help inspire. Many people from underrepresented groups end up genuinely believing that scientific investigation is not a career path open to them.

Moreover, many people still see the dedication and study needed to excel in science as “nerdy.” A cultural injection of Black Panther heroics could help continue to erode the crumbling tropes that science is only for white men or reserved for people with a special “science gene.”

And here we are many months later, and I was delighted to see that people did get a massive dose of science inspiration from T'Challa and his sister Shuri, and the whole of the Wakanda nation, not just in Black Panther, but also in the Avengers: Infinity War movie a short while after.

But my larger point here is that so much of this goes back to Stan Lee's work with collaborators in not just making "relatable" superheroes, as you've heard said so many times -- showing their flawed human side so much more than the dominant superhero trope (represented by Superman, Wonder Woman, Batman, etc.) allowed for at the time -- but making science and scientists be at the forefront of much of it. So many of the characters either were scientists (Banner (Hulk), Richards (Mr. Fantastic), T'Challa (Black Panther), Pym (Ant-Man), Stark (Iron Man), etc.) or used science actively to solve problems (e.g. Parker/Spider-Man).

November 04, 2018

Today marks the end of the mid-term break for many of us in the third level sector in Ireland. While a non-teaching week in the middle of term has been a stalwart of secondary schools for many years, the mid-term break only really came to the fore in the Irish third level sector when our universities, Institutes of Technology (IoTs) and other colleges adopted the modern model of 12-week teaching semesters.

Also known as ‘reading week’ in some colleges, the break marks a precious respite in the autumn/winter term. A chance to catch one’s breath, a chance to prepare teaching notes for the rest of term and a chance to catch up on research. Indeed, it is the easiest thing in the world to let the latter slide during the teaching term – only to find that deadlines for funding, book chapters and conference abstracts quietly slipped past while one was trying to keep up with teaching and administration duties.

A quiet walk in Foxrock on the last day of the mid-term break

Which brings me to a pet peeve. All those years later, teaching loads in the IoT sector remain far too high. Lecturers are typically assigned four teaching modules per semester, a load that may have been reasonable in the early days of teaching to Certificate and Diploma level, but makes little sense in the context of today’s IoT lecturer who may teach several modules at 3rd and 4th year degree level, with typically at least one brand new module each year – all of this whilst simultaneously attempting to keep up the research. It’s a false economy if ever there was one, as many a new staff member, freshly graduated from a top research group, will simply abandon research after a few busy years.

Of course, one might have expected to hear a great deal about this issue in the government's plan to ‘upgrade’ IoTs to technological university status. Actually, I have yet to see any public discussion of a prospective change in the teaching contracts of IoT lecturers – a question of money, no doubt. But this is surely another indication that we are talking about a change in name, rather than substance…

November 01, 2018

Maybe a decade or so ago* I made a Halloween costume which featured this simple mask decorated with symbols. “The scary face of science” I called it, mostly referring to people’s irrational fear of mathematics. I think I was being ironic. In retrospect, I don’t think it was funny at all.

Coleman on GHZS

My background is the talk "Quantum Mechanics In Your Face" by Sidney Coleman, which I consider the best argument why quantum mechanics cannot be described by a local and realistic theory (from which I would conclude it is not realistic). In a nutshell, the argument goes like this: Consider the three-qubit state

$|\psi\rangle = \frac 12\left(|\uparrow\uparrow\downarrow\rangle + |\uparrow\downarrow\uparrow\rangle + |\downarrow\uparrow\uparrow\rangle - |\downarrow\downarrow\downarrow\rangle\right)$

which is both an eigenstate with eigenvalue -1 of $\sigma_z\otimes\sigma_z\otimes\sigma_z$ and an eigenstate with eigenvalue +1 of $\sigma_x\otimes\sigma_x\otimes\sigma_z$ and any permutation thereof. This means that, given that the individual outcome of measuring a $\sigma$-matrix on a qubit is $\pm 1$, when measuring all three in the z-direction there will be an odd number of -1 results, but if two spins are measured in the x-direction and one in the z-direction there is an even number of -1's.

The latter tells us that the outcome of one z-measurement is the product of the two x-measurements on the other two spins. But multiplying this relation for all three spins we get, in shorthand, $ZZZ=(XXX)^2=+1$, in contradiction to the -1 eigenvalue for all three z-measurements.
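The eigenvalue conditions above pin the state down (up to a phase), so the parity argument can be checked numerically. A minimal sketch in plain Python, writing $\uparrow$ as 0 and $\downarrow$ as 1 in the z-basis:

```python
# Check of the parity argument: the three-qubit state
# (|001> + |010> + |100> - |111>)/2  (0 = up, 1 = down)
# has  sz sz sz = -1  and  sx sx sz = +1  for every permutation.

def kron(a, b):
    """Kronecker product of two square matrices given as nested lists."""
    n, m = len(a), len(b)
    return [[a[i // m][j // m] * b[i % m][j % m]
             for j in range(n * m)] for i in range(n * m)]

def matvec(m, v):
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

sx = [[0, 1], [1, 0]]
sz = [[1, 0], [0, -1]]

# Amplitudes in the z-basis, ordered |000>, |001>, ..., |111>.
psi = [0, 0.5, 0.5, 0, 0.5, 0, 0, -0.5]

ZZZ = kron(kron(sz, sz), sz)
XXZ = kron(kron(sx, sx), sz)
XZX = kron(kron(sx, sz), sx)
ZXX = kron(kron(sz, sx), sx)

assert matvec(ZZZ, psi) == [-a for a in psi]   # odd number of -1s in z
for op in (XXZ, XZX, ZXX):                     # even number of -1s for x,x,z
    assert matvec(op, psi) == psi
print("eigenvalue conditions verified")
```

Multiplying the three +1 relations reproduces the shorthand $ZZZ=(XXX)^2=+1$, which is exactly what the -1 eigenvalue of $ZZZ$ forbids.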

The conclusion is (unless you assume some non-local conspiracy between the spins) that one has to take seriously the fact that on a given spin I cannot measure both $\sigma_x$ and $\sigma_z$, and thus when actually measuring the latter I must not even assume that $X$ has some (albeit unknown) value $\pm 1$, as that leads to the contradiction. Stuff that I cannot measure does not have a value (that is also my understanding of what "not realistic" means).

Frauchiger and Renner

Now to the recent Nature paper. In short, they are dealing with two qubits (by which I only mean two-state systems). The first is in a box L' (I will try to use the somewhat unfortunate nomenclature from the paper) and the second is in a box L (L stands for lab). For L, we use the usual z-basis of $\uparrow$ and $\downarrow$ as well as the x-basis $\leftarrow = \frac 1{\sqrt 2}(\downarrow - \uparrow)$ and $\rightarrow = \frac 1{\sqrt 2}(\downarrow + \uparrow)$. Similarly, for L' we use the basis $h$ and $t$ (heads and tails, as it refers to a coin) as well as $o = \frac 1{\sqrt 2}(h - t)$ and $f = \frac 1{\sqrt 2}(h+t)$. The two qubits are prepared in the state

$|\Psi\rangle = \frac 1{\sqrt 3}\left(h\otimes\downarrow + t\otimes\downarrow + t\otimes\uparrow\right)$

From this one can conclude that, upon measuring L' and finding $o$, L must be in the state $\uparrow$. Call this observation C.

Using now C, B and A one is tempted to conclude that observing L' to be in state $o$ implies that L is in state $\rightarrow$. When we express the state in the $of\leftarrow\rightarrow$-basis, however, we get

$|\Psi\rangle = \frac 1{\sqrt{12}}\left(o\otimes\leftarrow - o\otimes\rightarrow + f\otimes\leftarrow + 3\,f\otimes\rightarrow\right)$

so with probability 1/12 we find both $o$ and $\leftarrow$. Again, we hit a contradiction.
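The 1/12 can be checked directly. A small sketch, assuming the prepared state $(h\otimes\downarrow + t\otimes\downarrow + t\otimes\uparrow)/\sqrt 3$ discussed above:

```python
# Probability of jointly finding o (for L') and <- (for L) in the
# Frauchiger-Renner state (|h,down> + |t,down> + |t,up>)/sqrt(3).
from math import sqrt

s = 1 / sqrt(2)
# Amplitudes psi[i][j] with i in {h, t} for L' and j in {down, up} for L.
psi = [[1 / sqrt(3), 0],              # h: spin down only
       [1 / sqrt(3), 1 / sqrt(3)]]    # t: spin down and up

o = [s, -s]       # o  = (h - t)/sqrt(2)
left = [s, -s]    # <- = (down - up)/sqrt(2)

# Amplitude <o, <- | psi> and the resulting probability.
amp = sum(o[i] * left[j] * psi[i][j] for i in range(2) for j in range(2))
prob = amp ** 2
print(prob)  # ~ 1/12
```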

One is tempted to use the same way out as above in the three-qubit case and say one should not argue about counterfactual measurements that are incompatible with measurements that were actually performed. But Frauchiger and Renner found a set-up which seems to avoid that.

They have observers F and F' ("friends") inside the boxes that do the measurements in the $ht$ and $\uparrow\downarrow$ bases, whereas later observers W and W' measure the state of the boxes, including the observers F and F', in the $of$ and $\leftarrow\rightarrow$ bases. So, at each stage of A, B, C the corresponding measurement has actually taken place and is not counterfactual!

Interference and it did not happen

I believe the way out is to realise that, at least from a retrospective perspective, this analysis stretches the language, and in particular the word "measurement", to the extreme. In order for W' to measure the state of L' in the $of$-basis, he has to interfere the contents, including F', coherently, such that no information from F''s measurement of $ht$ remains. Thus, when W''s measurement is performed, one should not really say that F''s measurement has in any real sense happened, as no possible information is left over. So it is in any practical sense counterfactual.

To see the alternative, consider a variant of the experiment where a tiny bit of information (maybe the position of one air molecule or the excitation of one of F''s neurons) escapes the interference. Let's call the two possible states of that qubit of information $H$ and $T$ (not necessarily orthogonal) and consider instead the state where that neuron is also entangled with the first qubit:

$|\Psi\rangle = \frac 1{\sqrt 3}\left(h\otimes\downarrow\otimes H + t\otimes\downarrow\otimes T + t\otimes\uparrow\otimes T\right)$

We see that now there is a term containing $o\otimes\downarrow\otimes(H-T)$. Thus, as long as the two possible states of the air molecule/neuron are actually different, observation C is no longer valid and the whole contradiction goes away.
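This can be made quantitative with a short sketch. The environment states $H$ and $T$ below are hypothetical unit vectors with overlap $\cos\epsilon$, so $\epsilon = 0$ is the fully coherent case:

```python
# "Leaked information" variant: the prepared state with an extra environment
# qubit recording h/t as states H and T with overlap cos(eps).
from math import sqrt, cos, sin

def prob_o_down(eps):
    """Probability of finding L' in o and L in down, for env states H, T."""
    H = (1.0, 0.0)
    T = (cos(eps), sin(eps))
    # Projecting |Psi> = (h@down@H + t@down@T + t@up@T)/sqrt(3) onto
    # o = (h - t)/sqrt(2) and down leaves the environment vector (H - T)/sqrt(6).
    leftover = [(H[k] - T[k]) / sqrt(6) for k in range(2)]
    return leftover[0] ** 2 + leftover[1] ** 2

print(prob_o_down(0.0))  # 0: H = T, observation C holds, contradiction intact
print(prob_o_down(0.3))  # > 0: C fails as soon as the leaked states differ
```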

This makes it clear that the whole argument relies on the fact that, when W' is doing his measurement, any remnant of the measurement by his friend F' is eliminated, and thus one should view the measurement of F' as if it never happened. Measuring L' in the $of$-basis really erases the measurement of F' in the complementary $ht$-basis.

October 24, 2018

This time, I want to continue the discussion from some months ago. Back then, I was rather general on how we could test our most dramatic idea. This idea is connected to what we regard as elementary particles. So far, the picture is that those you have heard about, the electrons, the Higgs, and so on, are truly the basic building blocks of nature. However, we have found a lot of evidence indicating that what we see in experiment, and call by these names, is actually not the same as the elementary particles themselves. Rather, they are a kind of bound state of the elementary ones, which only at first sight look like they themselves were the elementary ones. Sounds pretty weird, huh? And if it sounds weird, it means it needs to be tested. We did so with numerical simulations. They all agreed perfectly with the ideas. But, of course, it's physics, and thus we also need an experiment. The only question is which one.

We had some ideas already a while back. One of them will be ready soon, and I will talk about it again in due time. But it will be rather indirect, and somewhat qualitative. The other, however, requires a new experiment, which may need two more decades to build. Thus, neither can be the answer alone, and we need something more.

And this "more" is what we are currently closing in on. Because one needs this kind of weird bound-state structure to make the standard model consistent, not only exotic particles are more complicated than usually assumed. Ordinary ones are too. And the most ordinary are protons, the nucleus of the hydrogen atom. More importantly, protons are what is smashed together at the LHC at CERN. So, we already have a machine which may be able to test it. But this is involved, as protons are very messy. Already in the conventional picture they are bound states of quarks and gluons. Our results just say there are more components. Thus, we somehow have to disentangle old and new components. So, we have to be very careful in what we do.

Fortunately, there is a trick. All of this revolves around the Higgs. The Higgs has the property that it interacts more strongly with particles the heavier they are. The heaviest particles we know are the top quark, followed by the W and Z bosons. And the CMS experiment (and other experiments) at CERN has a measurement campaign to look at the production of these particles together! That is exactly where we expect something interesting can happen. However, our ideas are not the only ones leading to top quarks and Z bosons. There are many known processes which produce them as well. So we cannot just check whether they are there. Rather, we need to understand whether they are there as expected. E.g., whether they fly away from the interaction in the expected direction and with the expected speeds.

So what a master student and I do is the following. We use a program, called HERWIG, which simulates such events. One of the people who created this program helped us modify it, so that we can test our ideas with it. What we now do is rather simple. An input to such simulations is what the structure of the proton looks like. Based on this, the program simulates how the top quarks and Z bosons produced in a collision are distributed. We now just add our conjectured additional contributions to the proton, essentially a little bit of Higgs. We then check how the distributions change. By comparing the changes to what we get in experiment, we can then deduce how large the Higgs contribution in the proton is. Moreover, we can even indirectly deduce its shape, i.e. how the Higgs is distributed inside the proton.

And this is what we now study. We iterate modifications of the proton structure with comparisons to experimental results and to predictions without this Higgs contribution. Thereby, we constrain the Higgs contribution in the proton bit by bit. At the current time, we know that the data is only sufficient to provide an upper bound on this amount inside the proton. Our first estimates already show that this bound is actually not that strong, and quite a lot of Higgs could be inside the proton. But on the other hand, this is good, because it means that the data expected from the experiments in the next couple of years will be able either to constrain the contribution further, or even to detect it, if it is large enough. At any rate, we now know that we have sensitive leverage to understand this new contribution.

October 17, 2018

Last Sunday, we had the election for the federal state of Bavaria. Since the electoral system is kind of odd (but not as odd as first past the post), I would like to analyse how some variations in the rules (assuming the actual distribution of votes) would have worked out. So, first, here is how the seats are actually distributed: Each voter gets two ballots. On the first ballot, each party lists one candidate from the local constituency and you can select one. On the second ballot, you can vote for a party list (it's even more complicated because there, too, you can select individual candidates to determine their position on the list, but let's ignore that for today).

Then, in each constituency, the votes on ballot one are counted. The candidate with the most votes (as in first past the post) is elected to parliament directly (and is called a "direct candidate"). Then, overall, the votes for each party on both ballots (this is where the system differs from the federal elections) are summed up. All votes for parties with less than 5% of the grand total of all votes are discarded (actually including their direct candidates, but this is not of particular concern here). Let's call the rest the "reduced total". According to the fraction of each party in this reduced total, the seats are distributed.

Of course, the first problem is that you can only distribute seats in integer multiples of 1. This is solved using the Hare-Niemeyer method: You first distribute the integer parts. This clearly leaves fewer open seats than the number of parties. Those you then give to the parties where the rounding error down to the integer below was greatest. Check out the Wikipedia page explaining how this can lead to a party losing seats when the total number of seats available is increased.
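The Hare-Niemeyer step can be sketched in a few lines of Python. The vote numbers below are made up for illustration (a standard textbook-style example, not the Bavarian results), chosen to exhibit exactly the paradox just mentioned: two parties each lose a seat when the house grows from 25 to 26 seats.

```python
# A minimal sketch of the Hare-Niemeyer (largest-remainder) method.
def hare_niemeyer(votes, seats):
    """Distribute `seats` proportionally to `votes` (a dict party -> count)."""
    total = sum(votes.values())
    quota = {p: v * seats / total for p, v in votes.items()}
    alloc = {p: int(q) for p, q in quota.items()}   # integer parts first
    leftover = seats - sum(alloc.values())
    # Give the remaining seats to the largest rounding remainders.
    by_remainder = sorted(votes, key=lambda p: quota[p] - alloc[p], reverse=True)
    for p in by_remainder[:leftover]:
        alloc[p] += 1
    return alloc

# Hypothetical votes: D and E each drop a seat when the house grows to 26.
votes = {"A": 1500, "B": 1500, "C": 900, "D": 500, "E": 500, "F": 200}
print(hare_niemeyer(votes, 25))   # D and E get 3 seats each
print(hare_niemeyer(votes, 26))   # D and E get only 2 seats each
```

The paradox arises because enlarging the house changes all the quotas, and with them the ordering of the remainders.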

Because this is what happens in the next step: Remember that we already allocated a number of seats to constituency winners in the first round. Those count towards the number of seats that each party is supposed to get in step two according to its fraction of votes. Now it can happen that a party has won more direct seats than the seats allocated to it in step two. If that happens, more seats are added to the total number of seats and distributed according to the rules of step two, until each party has been allocated at least as many seats as it has direct candidates. This happens in particular if one party is stronger than all the others, leading to that party winning almost all direct seats (in Bavaria this happened to the CSU, which won all direct seats except five in Munich and one in Würzburg, which were won by the Greens).

A final complication is that Bavaria is split into seven electoral districts, and the above procedure is carried out for each district separately. So the rounding and seat-adding procedures happen seven times.

Now, for example, one can calculate the distribution without districts, throwing everything into a single super-district. Then there are 208 seats, distributed as

CSU 85 (40.8%)

SPD 22 (10.6%)

FW 26 (12.5%)

GREENS 40 (19.2%)

FDP 12 (5.8%)

AFD 23 (11.1%)

You can see that, in particular, the CSU, the party with the biggest number of votes, profits from doing the rounding seven times rather than just once, and the last three parties would benefit from giving up districts.

But then there is actually an issue of negative vote weight: The Greens are particularly strong in Munich, where they managed to win five direct seats. If those seats had instead gone to the CSU (as elsewhere), the number of seats for Oberbayern, the district Munich belongs to, would have had to be increased to accommodate those additional direct candidates for the CSU. This increases the weight of Oberbayern compared to the other districts, which would in turn benefit the Greens, as they are particularly strong in Oberbayern. So if I give all the direct seats to the CSU (without modifying the total vote counts), I get the following distribution:

221 seats

CSU 91 (41.2%)

SPD 24 (10.9%)

FW 28 (12.6%)

GREENS 42 (19.0%)

FDP 12 (5.4%)

AFD 24 (10.9%)

That is, the Greens would have gotten a higher fraction of seats if they had won fewer constituencies. Voting for Green candidates in Munich actually hurt the party as a whole!

The effect is not so big that it actually changes majorities (CSU and FW are likely to form a coalition) but still, the constitutional court does not like (predictable) negative weight of votes. Let's see if somebody challenges this election and what that would lead to.

Postscript: The above analysis in the last point is not entirely fair, as not winning a constituency means getting fewer votes, which are then missing from the grand total. Taking this into account makes the effect smaller. In fact, subtracting from the Greens the votes by which they led in the constituencies they won leads to an almost zero effect:

Seats: 220

CSU 91 41.4%

SPD 24 10.9%

FW 28 12.7%

GREENS 41 18.6%

FDP 12 5.4%

AFD 24 10.9%

Letting the Greens win München Mitte (a newly created constituency that was supposed to act like a bad bank for the CSU, absorbing central Munich's more left-leaning voters; do I hear somebody say "Gerrymandering"?) yields

Seats: 217

CSU 90 41.5%

SPD 23 10.6%

FW 28 12.9%

GREENS 41 18.9%

FDP 12 5.5%

AFD 23 10.6%

Or letting them win all but Moosach and Würzburg-Stadt, where their leads were smallest:

October 15, 2018

And then two come along at once... Following on yesterday, another of the longer interviews I've done recently has appeared. This one was for Sean Carroll's excellent Mindscape podcast. This interview/chat is all about string theory, including some of the core ideas, its history, what that "quantum gravity" thing is anyway, and why it isn't actually a theory of (just) strings. Here's a direct link to the audio, and here's a link to the page about it on Sean's blog.

The whole Mindscape podcast has had some fantastic conversations, by the way, so do check it out on iTunes or your favourite podcast supplier!

September 27, 2018

The history of physics is full of stuff developed for one purpose that ended up being useful for an entirely different purpose. Quite often it also failed its original purpose miserably, but is paramount for the new one. More recent examples are the first attempts to describe the weak interaction, which ended up describing the strong one. Also, string theory was originally invented for the strong interaction, and failed at this purpose. Now, well, it is a popular-science star, and a serious candidate for quantum gravity.

But failing is optional for having a second use. And we are just starting to discover a second use for our investigations of grand-unified theories. There, our research used a toy model. We did this because we wanted to understand a mechanism, and because doing the full story would have been much too complicated before we knew whether the mechanism works. But it turns out this toy theory may be an interesting theory in its own right.

And it may be interesting for a very different topic: dark matter. This is a hypothetical type of matter of which we see a lot of indirect evidence in the universe. But we are still mystified about what it is (and whether it is matter at all). Of course, such mysteries draw our interest like a flame draws the moth. Hence, our group in Graz is starting to push in this direction as well, curious about what is going on. For now, we follow the most probable explanation, that there are additional particles making up dark matter. Then there are two questions: What are they? And do they interact with the rest of the world, and if so, how? Aside from gravity, of course.

Next week I will go to a workshop in which new ideas on dark matter will be explored, to get a better understanding of what is known. And in the course of preparing for this workshop I noted that there is this connection. I will actually present this idea at the workshop, as it forms a new class of possible explanations of dark matter. Perhaps not the right one, but at the current time an equally plausible one as many others.

And here is how it works. Theories of the grand-unified type were for a long time expected to have a lot of massless particles. This was not bad for their original purpose, as we know quite a few such particles, like the photon and the gluons. However, our results showed, with an improved treatment and a shift in paradigm, that this is not always true. At least some of these theories do not have massless particles.

But dark matter needs to be massive to influence stars and galaxies gravitationally. And, except for very special circumstances, there should not be additional massless dark particles, because otherwise the massive ones could decay into the massless ones. And then the mass is gone, and this does not work. That was the reason why such theories had been excluded. But with our new results, they become feasible. Even more so: we have a lot of indirect evidence that dark matter is not just a single, massive particle. Rather, it needs to interact with itself, and there could indeed be many different dark matter particles. After all, if there is dark matter, it makes up four times more stuff in the universe than everything we can see. And what we see consists of many particles, so why should dark matter not do so as well? And this is also realized in our model.

And this is how it works. The scenario I will describe (you can download my talk already now, if you want to look for yourself, though it is somewhat technical) features two different types of stable dark matter. Furthermore, they interact. And the great thing about our approach is that we can calculate this quite precisely, giving us a chance to make predictions. Still, we need to do this to make sure that everything works with what astrophysics tells us. Moreover, this setup gives us two additional particles, which we can couple to the Higgs through a so-called portal. Again, we can calculate this, and how everything comes together. This allows us to test this model not only by astronomical observations, but also at CERN. This is the basic idea. Now we need to do all the detailed calculations. I am quite excited to try this out :) - so stay tuned to see whether it actually makes sense, or whether the model will have to wait for another opportunity.

September 25, 2018

Sir Michael Atiyah, one of the world’s greatest living mathematicians, has proposed a derivation of α, the fine-structure constant of quantum electrodynamics. A preprint is here. The math here is not my forte, but from the theoretical-physics point of view, this seems misguided to me.

Caveat: Michael Atiyah is a smart cookie and has accomplished way more than I ever will. It’s certainly possible that, despite the considerations I mention here, he’s somehow onto something, and if so I’ll join in the general celebration. But I honestly think what I’m saying here is on the right track.

In quantum electrodynamics (QED), α tells us the strength of the electromagnetic interaction. Numerically it’s approximately 1/137. If it were larger, electromagnetism would be stronger, atoms would be smaller, etc.; and inversely if it were smaller. It’s the number that tells us the overall strength of QED interactions between electrons and photons, as calculated by diagrams like these. As Atiyah notes, in some sense α is a fundamental dimensionless numerical quantity like e or π. As such it is tempting to try to “derive” its value from some deeper principles. Arthur Eddington famously tried to derive exactly 1/137, but failed; Atiyah cites him approvingly.

But to a modern physicist, this seems like a misguided quest. First, because renormalization theory teaches us that α isn’t really a number at all; it’s a function. In particular, it’s a function of the total amount of momentum involved in the interaction you are considering. Essentially, the strength of electromagnetism is slightly different for processes happening at different energies. Atiyah isn’t even trying to derive a function, just a number.

This is basically the objection given by Sabine Hossenfelder. But to be as charitable as possible, I don’t think it’s absolutely a knock-down objection. There is a limit we can take as the momentum goes to zero, at which point α is a single number. Atiyah mentions nothing about this, which should give us skepticism that he’s on the right track, but it’s conceivable.

More importantly, I think, is the fact that α isn’t really fundamental at all. The Feynman diagrams we drew above are the simple ones, but to any given process there are also much more complicated ones, e.g.

And in fact, the total answer we get depends not only on the properties of electrons and photons, but on all of the other particles that could appear as virtual particles in these complicated diagrams. So what you and I measure as the fine-structure constant actually depends on things like the mass of the top quark and the coupling of the Higgs boson. Again, nowhere to be found in Atiyah’s paper.

Most importantly, in my mind, is that not only is α not fundamental, QED itself is not fundamental. It’s possible that the strong, weak, and electromagnetic forces are combined into some Grand Unified theory, but we honestly don’t know at this point. However, we do know, thanks to Weinberg and Salam, that the weak and electromagnetic forces are unified into the electroweak theory. In QED, α is related to the “elementary electric charge” e by the simple formula α = e²/4π. (I’ve set annoying things like Planck’s constant and the speed of light equal to one. And note that this e has nothing to do with the base of natural logarithms, e = 2.71828.) So if you’re “deriving” α, you’re really deriving e.

But e is absolutely not fundamental. In the electroweak theory, we have two coupling constants, g and g’ (for “weak isospin” and “weak hypercharge,” if you must know). There is also a “weak mixing angle” or “Weinberg angle” θW relating how the original gauge bosons get projected onto the photon and W/Z bosons after spontaneous symmetry breaking. In terms of these, we have a formula for the elementary electric charge: e = g sinθW. The elementary electric charge isn’t one of the basic ingredients of nature; it’s just something we observe fairly directly at low energies, after a bunch of complicated stuff happens at higher energies.
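The two relations quoted above can be put together numerically. A quick illustration, where the inputs are approximate measured low-energy values (nothing here is derived from deeper principles):

```python
# Numerical illustration of alpha = e^2/(4 pi) and e = g*sin(theta_W),
# in natural units (hbar = c = 1). All inputs are approximate measured values.
from math import pi, sqrt

alpha = 1 / 137.036            # fine-structure constant (low-energy value)
e = sqrt(4 * pi * alpha)       # elementary electric charge, ~ 0.303
print(e)

sin2_theta_w = 0.2312          # sin^2 of the weak mixing angle (approximate)
g = e / sqrt(sin2_theta_w)     # weak isospin coupling implied by e = g sin(theta_W)
print(g)
```

Inverting the chain makes the point of this section concrete: “deriving” α really means deriving g and θW (and everything that feeds into them).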

Not a whit of this appears in Atiyah’s paper. Indeed, as far as I can tell, there’s nothing in there about electromagnetism or QED; it just seems to be a way to calculate a number that is close enough to the measured value of α that he could plausibly claim it’s exactly right. (Though skepticism has been raised by people trying to reproduce his numerical result.) I couldn’t see any physical motivation for the fine-structure constant to have this particular value.

These are not arguments why Atiyah’s particular derivation is wrong; they’re arguments why no such derivation should ever be possible. α isn’t the kind of thing for which we should expect to be able to derive a fundamental formula, it’s a messy low-energy manifestation of a lot of complicated inputs. It would be like trying to derive a fundamental formula for the average temperature in Los Angeles.

Again, I could be wrong about this. It’s possible that, despite all the reasons why we should expect α to be a messy combination of many different inputs, some mathematically elegant formula is secretly behind it all. But knowing what we know now, I wouldn’t bet on it.

August 13, 2018

A couple of months ago, the 2018 Gruber Prize in Cosmology was awarded to the Planck Satellite. This was (I think) a well-deserved honour for all of us who have worked on Planck during the more than 20 years since its conception, for a mission which confirmed a standard model of cosmology and measured the parameters which describe it to accuracies of a few percent. Planck is the latest in a series of telescopes and satellites dating back to the COBE Satellite in the early 90s, through the MAXIMA and Boomerang balloons (among many others) around the turn of the 21st century, and the WMAP Satellite (The Gruber Foundation seems to like CMB satellites: COBE won the Prize in 2006 and WMAP in 2012).

Unfortunately, the Gruber Foundation apparently has some convoluted rules about how it makes such group awards, and the PIs were not allowed to split the monetary portion of the prize among the full 300-plus team. Instead, they decided to share the second half of the funds amongst “43 identified members made up of the Planck Science Team, key members of the Planck editorial board, and Co-Investigators of the two instruments.” Those words were originally on the Gruber site but in fact have since been removed — there is no public recognition of this aspect of the award, which is completely appropriate as it is the whole team who deserves the award. (Full disclosure: as a member of the Planck Editorial Board and a Co-Investigator, I am one of that smaller group of 43, chosen not entirely transparently by the PIs.)

I also understand that the PIs will use a portion of their award to create a fund for all members of the collaboration to draw on for Planck-related travel over the coming years, now that there is little or no governmental funding remaining for Planck work, and those of us who will also receive a financial portion of the award will also be encouraged to do so (after, unfortunately, having to work out the tax implications of both receiving the prize and donating it back).

This seems like a reasonable way to handle a problem with no real fair solution, although, as usual in large collaborations like Planck, the communications about this left many Planck collaborators in the dark. (Planck also won the Royal Society 2018 Group Achievement Award which, because there is no money involved, could be uncontroversially awarded to the ESA Planck Team, without an explicit list. And the situation is much better than for the Nobel Prize.)

However, this seemingly reasonable solution reveals an even bigger, longer-standing, and wider-ranging problem: only about 50 of the 334 names on the full Planck team list (roughly 15%) are women. This is already appallingly low. Worse still, none of the 43 formerly “identified” members officially receiving a monetary prize are women (although we would have expected about 6 given even that terrible fraction). Put more explicitly, there is not a single woman in the upper reaches of Planck scientific management.

In the previous entry I wrote how hard it is to establish a new idea, if the only existing option to get experimental confirmation is to become very, very precise. Fortunately, this is not the only option we have. Besides experimental confirmation, we can also attempt to test an idea theoretically. How is this done?

The best possibility is to set up a situation, in which the new idea creates a most spectacular outcome. In addition, it should be a situation in which older ideas yield a drastically different outcome. This sounds actually easier than it is. There are three issues to be taken care of.

The first two have something to do with a very important distinction: that between a theory and an observation. An observation is something we measure in an experiment, or calculate if we play around with models. An observation is always an outcome: we set something up initially, and then look at it some time later. The theory should give a description of how the initial and the final stuff are related. This means that for every observation we look for a corresponding theory to give it an explanation. Added to this is the modern idea of physics that there should not be a separate theory for every observation. Rather, we would like to have a unified theory, i.e. one theory which explains all observations. This is not yet the case. But at least we have reduced it to a handful of theories. In fact, for anything going on inside our solar system we need so far just two: the standard model of particle physics and general relativity.

Coming back to our idea, we now have the following problem. Since we are doing a gedankenexperiment, we are allowed to choose any theory we like. But since we are just a bunch of people with a bunch of computers, we are not able to calculate all the possible observations a theory can describe, not to mention all possible observations of all theories. And this is where the problem starts. The older ideas still exist because they are not bad; rather, they explain a huge amount of stuff. Hence, for many observations in any theory they will still be more than good enough. Thus, to find spectacular disagreement, we do not only need to find a suitable theory. We also need to find a suitable observation to show the disagreement.

And now enters the third problem: We actually have to do the calculation to check whether our suspicion is correct. This is usually not a simple exercise. In fact, the effort needed can make such a calculation a complete master's thesis. And sometimes much more. Only after the calculation is complete do we know whether the observation and theory we have chosen were a good choice, because only then do we know whether the anticipated disagreement is really there. And it may be that our choice was not good, and we have to restart the process.

Sounds pretty hopeless? Well, this is actually one of the reasons why physicists are famed for their tolerance of frustration, because such experiences are indeed inevitable. But fortunately it is not as bad as it sounds. And that has something to do with how we choose the observation (and the theory). This I have not yet specified. And just guessing would indeed lead to a lot of frustration.

What helps us hit the right theory and observation more often than not is insight and, especially, experience. The ideas we have tell us how theories function. I.e., our insights give us the ability to estimate what will come out of a calculation even without actually doing it. Of course, this will be a qualitative statement, i.e. one without exact numbers. And it will not always be right. But if our ideas are correct, it will usually work out. In fact, if we regularly estimated incorrectly, this should require us to reevaluate our ideas. And it is our experience which helps us get from insights to estimates.

This defines our process for testing our ideas. And this process can actually be traced out well in our research. E.g., in a paper from last year we collected many such qualitative estimates. They were based on some much older, much cruder estimates published several years back. In fact, the newer paper already included some quite involved semi-quantitative statements. We then used massive computer simulations to test our predictions. They were confirmed as well as possible with the amount of computing we had. This we reported in another paper. This gives us hope that we are on the right track.

So, the next step is to enlarge our testbed. For this, we already came up with some new first ideas. However, these will be even more challenging to test. But it is possible. And so we continue the cycle.