This is the title of a conference run by the Royal Society in London that I spent today and yesterday attending. Overall, there wasn’t anything particularly new or controversial said, and the level was mostly non-technical. Nevertheless, there were many good talks, albeit with plenty of overlap; I’ll quickly mention just a few.

The first talk, by Tom Kibble, was a lovely 40-minute recounting of the birth of the idea of spontaneous symmetry breaking in gauge theories, from his perspective as one of its founders and as a member of Abdus Salam’s group at Imperial College. Another interesting talk was by Mikhail Shaposhnikov on his Higgs inflaton model of cosmology, which uses the Higgs field to drive inflation and incorporates scale symmetry. In addition, there were several experimental talks, including one by Fabiola Gianotti and Tejinder Virdee (former spokespersons for ATLAS and CMS, respectively) on the Higgs discovery and measurement, and several others on searching for new physics at the LHC. One interesting fact from Chris Llewellyn-Smith’s talk on the genesis of the LHC—with apparently more such stories contained in his upcoming book—is that the LHC magnet casings are, because of him, coloured Oxford blue, which is, he remarked, unfortunately quite similar to Cambridge blue…

Nima Arkani-Hamed was in Oxford two weeks ago giving two lectures for the philosophy of cosmology conference “Anthropics, Selection Effects and Fine-Tuning in Cosmology.” He also gave two talks in the maths department on scattering amplitudes and a talk in the physics department about building a 100 km circular collider. I think that similar versions of all these talks can be found online, so I’ll just give a broad outline and some nice quotes.

In his first conference talk, “Naturalness and Its Discontents: Why Is There a Macroscopic Universe?,” he talked mostly about naturalness in relation to the cosmological constant and hierarchy problems, stating that “naturalness problems are not an inconsistency of physics—rather, they’re a guide for what to expect,” and that “something big and structural is needed to remove the cosmological constant problem”. He said, “the broad idea of naturalness is being put under more pressure by the LHC, but I’m not more worried than pre-LHC since people already had to make excuses.” However, “it’s a little disquieting.” He mentioned that finding nothing else new at the LHC would represent a 1% fine-tuning for the weak scale, which has happened before in physics, e.g. the moon almost exactly eclipsing the sun and the low quadrupole of the CMB. However, if it goes to a 1/10,000 tuning, “it would make the Higgs much more special than these crappy examples.” On the multiverse: “Asking if we’re part of a multiverse isn’t a theory but a caricature of what a future theory might look like.” In response to a question at the end he said: “the manifold structure [of spacetime] is all in our heads–it’s better to phrase things in terms of high-energy scattering amplitudes.”

His second conference talk, “Space-Time, Quantum Mechanics and the Multiverse,” was about the physics motivation for considering a multiverse. Basically, “it’s the only scientific approach to the cosmological constant problem and it happens in some theories e.g. chaotic inflation, string theory, simple toy models.” He said that he suspects that making sense of the multiverse will require a step as radical as going from classical to quantum physics. He thinks this will involve getting rid of space-time and thinking carefully about what the precise observables are: “the really big mysteries are cosmological observables.”

He gave a talk in the maths department on the amplituhedron. He started with a long discussion of how gravity implies the absence of local observables (except at infinity). He then talked about how the least action formulation of classical mechanics helped connect to the next level of description of (quantum) physics, and how something similar will probably be needed to get rid of space-time. This is linked to the idea that scattering amplitudes written in terms of Feynman diagrams are constructed to have manifest locality and unitarity, which requires incorporating gauge redundancy; in the alternative approach, the scattering amplitudes of a particularly symmetric theory (N=4 super Yang–Mills) come out more simply as the “volume” of a region in some abstract space, so that unitarity and locality are derived rather than assumed. He then spent quite some time defining the amplituhedron idea as a generalisation of the inside of a triangle. (There was a follow-up technical talk the next day that I didn’t attend.)

His collider talk, “Motivations for a 100 km collider,” began with: “every physics point in this talk is obvious,” his view being that the main motivation for building a collider is that it’s the obvious future of the field. He said, “we’ve never had a consistent theory valid up to exponentially higher energies–this is a qualitatively different scenario from what we’ve seen in the past,” but that in every scenario he can imagine we will need a 100 TeV pp machine–there are deep structural issues in QFT at stake in finding out whether the weak scale is natural. If we don’t find anything more at the LHC, there’ll be a 10% tuning and we’ll want to know if it’s more (the tuning goes as the square of the machine energy), since it’s significant to say that the weak scale is 100 times more tuned than other examples. He also said that he thinks the Higgs discovery is undersold: a light Higgs boson means that our vacuum is qualitatively different from a random condensed matter system (“it’s not some crappy metal”). When talking about the 1% tuning in the fact that neutrons don’t bind, he said that when he tried to learn nuclear physics as a graduate student he found it really confusing. He also talked about his visit to China to discuss building a 100 km collider there, and about the new centre for future HEP being set up in Beijing, where he’s going to spend 2–3 months every year for the next two years. He thinks there’s a greater than 1% chance that they’ll actually build it, which is why he’s spending that much time there. He noted that a good thing about building the next large collider under an authoritarian regime is that you only have to convince a few people (unlike in the US). He also mentioned that he had a one-hour conversation with Al Gore, in which Gore apologised for the SSC cancellation (he was largely responsible for it).
In response to a question about split-SUSY (I think), he said: “the psychological thing with model building is that you have to believe it’s right at the time so you’re motivated to work out its consequences, and when you’re done you forget about it. I wish I didn’t need these psychological crutches, but life is hard.”
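The “tuning goes as the square of the machine energy” remark follows from the standard naturalness estimate; as a schematic sketch (my notation, not anything specific from the talk):

```latex
% Quantum corrections to the Higgs mass are quadratically sensitive
% to a new-physics scale \Lambda:
\delta m_h^2 \sim \Lambda^2 ,
% so the tuning needed to keep the Higgs light is roughly
\Delta^{-1} \sim \frac{m_h^2}{\Lambda^2} .
% A collider of energy E probes scales \Lambda \sim E, so a null result
% at a machine with k times the energy implies a tuning k^2 times worse.
```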

This is a neat article by John Baez and Emory Bunn which discusses the geometrical meaning of Einstein’s equations. It includes this characterization of Einstein’s equations in one plain English sentence:

Given a small ball of freely falling test particles initially at rest with respect to each other, the rate at which it begins to shrink is proportional to its volume times: the energy density at the center of the ball, plus the pressure in the x direction at that point, plus the pressure in the y direction, plus the pressure in the z direction.
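If I am remembering the conventions of the Baez–Bunn paper correctly (units in which 8πG = c = 1), this sentence corresponds to the equation

```latex
\left. \frac{\ddot{V}}{V} \right|_{t=0}
  = -\frac{1}{2} \left( \rho + P_x + P_y + P_z \right) ,
```

where V is the volume of the ball of test particles, ρ is the energy density at its centre, and the P_i are the pressures there.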

I’m ripping off a blog post by Peter Coles–which was itself taken from a comment to a post on Sean Carroll’s blog–by pointing out some interesting articles by Avi Loeb that discuss cosmological conformism, cosmological conservatism, and rating the potential success of various research areas. I’ll briefly discuss Loeb’s articles in chronological order.

In this article, Loeb argues that young researchers ought to allocate time to innovative, high-risk, high-reward research areas, as well as to the more conservative mainstream research agendas. Loeb discusses the cultural barriers to this type of research. One of the obvious troubles is that:

Clearly, failure and waste of time are a common outcome of risky projects, just as the majority of venture capital investments lose money (but have the attractive feature of being more profitable than anything else if successful). The fear of losses is sure to keep most researchers away from risky projects, which will attract only those few who are willing to face the strong headwind. Risky projects are accompanied by loneliness. Even after an unrecognized truth is discovered, there is often persistent silence and lack of attention from the rest of the community for a while. This situation contrasts with the nurturing feedback that accompanies a project on a variation of an existing theme already accepted by a large community of colleagues who work on the same topic.

He then gives some examples of low-, medium-, and high-risk research, and suggests that astrophysics postdocs should adopt a 50-30-20 distribution of research time across low-, medium-, and high-risk topics, respectively, as opposed to the usual 80-15-5 distribution. Although this article concerns theoretical astrophysics research, I suspect that most of it carries over directly to the rest of theoretical physics.

In the next article, Loeb proposes and discusses the idea of a website run by graduate students that uses publicly available data to assess “the future dividends of various research frontiers”. I quite like this idea in principle. One concern would be that such an assessment would not be any more objective than, say, university rankings, which are frequently criticised for giving seemingly arbitrary weightings to the various factors used in their evaluation metrics. In Loeb’s scheme, this problem is dealt with by using historical data to calculate the weightings that would correctly predict a research area’s likelihood of success. Of course, in practice there are enough ill-defined concepts involved that this couldn’t be implemented without bias. As to whether or not this is nevertheless a useful enough idea to implement, I’m undecided.

This article, titled “How to Nurture Scientific Discoveries Despite Their Unpredictable Nature”, suggests that funding agencies should give more support to open research that has no programmatic agenda because the potential benefits from unexpected breakthroughs are so vast that they outweigh the high risk of failure. It’s persuasively written and I completely agree with the main idea: that it’s important to financially support risky innovation, as well as established physics. I’m too ignorant to know whether current practice underfunds research “without programmatic reins tied to specific goals”, but based on what I’ve read in books and on blogs, it’s probably the case. Here’s the final paragraph, mainly because he managed to incorporate a biblical reference:

Progress is not linear in time and sometimes it is even inversely proportional to the contemporaneous level of invested effort. This is because progress rests on lengthy preparatory work which lays the foundation for a potential discovery. Therefore, it is inappropriate to measure success based on the contemporaneous level of allocated resources. Lost resources (time and money) should never be a concern in a culture that is not tied to a specific programmatic agenda, because the long-term benefits from finding something different from what you were seeking could be at an elevated level, far more valuable than these lost resources. This echoes a quote from 1 Samuel (Chapter 9, 20), concerning the biblical story of Saul seeking his lost donkeys. The advice Saul received from Samuel, the person who crowned him as a king after their chance meeting, was simple: “As for the donkeys you lost three days ago, do not worry about them…”.

The most recent article, from May this year, encourages senior scientists to mentor young astrophysics researchers to be bold, creative “architects”, rather than conservative “engineers”. (This reminds me of Lee Smolin’s discussion of “seers” and “craftspeople”.) The opening paragraph sums this up nicely:

Too few theoretical astrophysicists are engaged in tasks that go beyond the refinement of details in a commonly accepted paradigm. It is far more straightforward today to work on these details than to review whether the paradigm itself is valid. While there is much work to be done in the analysis and interpretation of experimental data, the unfortunate by-product of the current state of affairs is that popular, mainstream paradigms within which data is interpreted are rarely challenged. Most cosmologists, for example, lay one brick of phenomenology at a time in support of the standard (inflation+Λ+Cold-Dark-Matter) cosmological model, resembling engineers that follow the blueprint of a global construction project, without pausing to question whether the architecture of the project makes sense when discrepancies between expectations and data are revealed.

The roots of this conformism are obvious:

The unfortunate reality of young astrophysicists having to spend their most productive years in lengthy postdoctoral positions without job security promotes conformism, as postdocs aim to improve their chance of getting a faculty job by supporting the prevailing paradigm of senior colleagues who serve on selection committees.

He goes on to argue why modern cosmology needs architects:

Some argue that architects were only needed in the early days of a field like cosmology when the fundamental building blocks of the standard model, e.g., the inflaton, dark matter and dark energy, were being discovered. As fields mature to a state where quantitative predictions can be refined by detailed numerical simulations, the architectural skills are no longer required for selecting a winning world model based on comparison to precise data. Ironically, the example of cosmology demonstrates just the opposite. On the one hand, we measured various constituents of our Universe to two significant digits and simulated them with accurate numerical codes. But at the same time, we do not have a fundamental understanding of the nature of the dark matter or dark energy nor of the inflaton. In searching for this missing knowledge, we need architects who could suggest to us what these constituents might be in light of existing data and which observational clues should be searched for. Without such clues, we will never be sure that inflation really took place or that dark matter and dark energy are real and not ghosts of our imagination due to a modified form of gravity.

In the original post by Sean Carroll that I mentioned at the start of this post, which is worth reading, Sean plays devil’s advocate to this idea. He ends with this sobering perspective:

Then again, you gotta eat. People need jobs and all that. I can’t possibly blame anyone who loves science and chooses to research ideas that are established and have a high probability of yielding productive results. The real responsibility shouldn’t be on young people to be bomb-throwers; it should be on the older generation, who need to be willing to occasionally take a bomb to the face, and even thank the bomb-thrower for making the effort. Who knows when an explosion might unearth some unexpected treasure?

“The reason for trying to understand the universe isn’t that we thereby blunder into a new material for coating non-stick frying pans. It’s that we gain insight into our place in the scheme of things, and of just how wonderful and unexpected that scheme can be. The aim of science is not just the manufacture of new toys: it’s the enrichment of the human spirit.”

Fearful Symmetry: Is God a Geometer? by Ian Stewart and Martin Golubitsky is a popular science book from the early ’90s on the subject of symmetry, with an emphasis on the fascinating phenomenon of symmetry breaking. The authors take the reader on an idiosyncratic tour of the subject, discussing cosmology, crystallography, and biology, including, for example, detailed discussions of the Couette–Taylor fluid system, animal gaits, and embryology. Scattered throughout the book is also a sprinkling of sage discussion on the philosophy of science, as the opening quote attests.

There are many popular books in existence on the topic of symmetry; what makes this one stand out is that the authors have assumed a bit of intelligence on the part of the reader, which allows Fearful Symmetry to be more sophisticated than your average pop sci book.

However, as a physics student, I was disappointed by what was omitted from the discussion of the standard model (SM) of particle physics. The authors did, to their credit, emphasise the fundamental importance of symmetries in the SM—they even explicitly named the SU(3) gauge symmetry of quantum chromodynamics—and they also mentioned that the electroweak symmetry breaks at low energies, which results in the apparently distinct electromagnetic and weak forces. Unfortunately, though, there was no mention of the role of symmetry breaking in giving masses to the W and Z bosons and all of the fundamental fermions—a process called the Higgs mechanism, which is responsible for the weakness of the weak force and the mass of the electron. This, in my biased opinion, is one of the most—if not the most—important examples of symmetry breaking in nature, so it is a shame that it wasn’t included.

Another thing that annoyed me was the statement of the purported mystery that all particles of the same type are identical; this is indeed a deep empirical fact that needs explaining, but it’s not a mystery in the context of the very successful field-based theories of modern particle physics: all particles of the same type are identical (up to a minus sign) because they arise from the same underlying field. There were also some outdated references to the promise of grand unified theories and string theory, but that’s only because the book was written two decades ago.

Admittedly, I’m being pedantic. It’s only because this book is more intellectual and goes deeper than most non-technical accounts that I hold it to high standards, so this criticism should be taken as a compliment to the rest of the book. Overall, the authors do a good job of explaining the unassuming ubiquity of symmetry and symmetry breaking in the real world, which makes for some fairly interesting reading on an important topic.

I recently came across the 2006 edition of Richard Feynman’s wonderful book “QED: The Strange Theory of Light and Matter,” published by Princeton University Press. This book deserves a blog post all to itself, but here I only want to point out the shiny new feature of this most recent edition–surely a publishing scam designed to make me buy the book twice–which is an excellent introduction by Anthony Zee.

Fortunately, it turns out that I don’t need to re-buy the book since Zee has published the introduction online. I recommend reading Zee’s frank and humorous introduction to any aspiring physicists or physics fans out there.

I want to sort out the names of various effects and what-not related to the cosmic microwave background (CMB) radiation, which probably makes for boring reading material. Oh well. In particular, I’m going to summarise four effects that shape the temperature anisotropies in the CMB:

The Sunyaev-Zel’dovich effect

The Sachs-Wolfe effect

Diffusion damping

Baryon Acoustic Oscillation

The Sunyaev-Zel’dovich effect is the boost in energy that CMB photons gain via inverse Compton scattering off electrons in the hot gas surrounding galaxy clusters.

The Sachs-Wolfe effect is the gravitational redshift of CMB photons: when it occurs at the surface of last scattering it is called the “non-integrated Sachs-Wolfe effect”, and when it accumulates along the line of sight after last scattering it is called (you guessed it) the “integrated Sachs-Wolfe effect”.
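For the non-integrated effect there is a well-known back-of-the-envelope result (sign and convention details vary between references): a Newtonian potential Φ at the surface of last scattering produces a temperature fluctuation

```latex
\frac{\Delta T}{T} \simeq \frac{\Phi}{3} ,
```

the famous factor of 1/3 arising from the partial cancellation between the gravitational redshift and the intrinsic temperature perturbation of the plasma sitting in the potential well.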

Diffusion damping (also known as Silk damping) is the suppression of small-scale anisotropies caused by photons diffusing out of overdense regions of the primordial plasma, dragging baryons along with them. Note that this has the effect of flattening out density anisotropies below the diffusion scale.

Lastly, the Baryon Acoustic Oscillation (BAO) is the imprint of acoustic vibrations in the primordial plasma, caused by the opposing forces of radiation pressure and gravitational attraction. Measuring the angular scales of the acoustic peaks tells cosmologists lots of useful information about the universe and helps us constrain cosmological parameters.
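For rough orientation (standard textbook numbers, not tied to any particular measurement): the acoustic peaks sit at angular scales set by the sound horizon at recombination divided by the distance to the last-scattering surface,

```latex
\theta_s \sim \frac{r_s}{D_A} \sim 1^\circ
\quad \Longleftrightarrow \quad
\ell \sim \frac{\pi}{\theta_s} \sim 200 ,
```

with the comoving sound horizon r_s of order 150 Mpc, which is why the first peak of the CMB power spectrum appears at multipoles of a couple of hundred.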