An independent scientist’s observations on society, technology, energy, science and the environment. “Modern science has been a voyage into the unknown, with a lesson in humility waiting at every stop. Many passengers would rather have stayed home.” – Carl Sagan


It’s a common misconception that controlled nuclear fusion in a magnetic confinement device such as ITER requires heating the material, in the everyday sense, to a temperature of hundreds of millions of degrees.

That’s a subtle issue, and in the context of the everyday, familiar notion of temperature, it’s somewhat complicated. What matters is the plasma temperature – an ion temperature. For deuterium-tritium fusion, the Lawson product is minimised at a plasma temperature of about 25 keV, so that is the optimum plasma temperature for the D-T plasma in the reactor. A plasma at such an effective temperature doesn’t really compare to a material being heated up in our familiar everyday experience.

In a colour TV, for example (the CRT kind, not the newfangled kinds), the electrons are accelerated across a potential of approximately 25 kV – that is, to an energy of 25 keV, corresponding to an effective “temperature” of 290 million K. But clearly the TV tube isn’t heated to 290 million K in the conventionally familiar sense.
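The conversion between particle energy and effective temperature is just E = k_B·T. A minimal sketch, using a rounded value of the Boltzmann constant:

```python
# Convert a particle energy in keV to an effective "temperature" via E = k_B * T.
# The Boltzmann constant is rounded for clarity.

K_B_EV_PER_K = 8.617e-5  # Boltzmann constant, eV per kelvin

def effective_temperature_kelvin(energy_kev: float) -> float:
    """Effective temperature corresponding to a particle energy."""
    return energy_kev * 1e3 / K_B_EV_PER_K

# A 25 keV electron in a CRT corresponds to roughly 2.9e8 K:
print(f"{effective_temperature_kelvin(25):.2e} K")
```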

Even after the technology has been in widespread use for many, many years, you still occasionally hear from some sources that the common household ionisation smoke detectors, which contain a very small radioactive source, present some kind of health hazard.
They don’t.

Such devices usually contain a sealed source, of 0.9 to 1.0 μCi of Americium-241. The source itself is a tiny little thing, about three millimeters in diameter – it’s a point source. *

241Am principally decays by emission of an alpha particle at 5.49 MeV, but this is of no radiological significance, since the α-particle cannot escape the device at all. However, 241Am also emits some low-energy gamma photons as it decays – principally a gamma ray at 59 keV – and it is this γ-radiation that can pass through the device and conceivably deliver some small dose to the householder.

The dose rate to the whole body from external exposure to a point radioactive source emitting photon radiation is just the Γ-factor for the particular nuclide, multiplied by the activity of the source, multiplied by the familiar inverse-square distance term.

Suppose you’ve got such a detector on the ceiling right next to your bed, which would place you, say, about 3 m away from the source. Suppose, additionally, that you spend your entire life in that bed. Of course, here I’m taking the most conservative scenario possible, to set an upper bound on the plausible dose – which works out to a few microsieverts per year.
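The calculation described above can be sketched as follows. The Γ value used here is an illustrative assumed figure for the 59 keV photons of Am-241, not an authoritative constant; the point is the structure of the formula and its inverse-square behaviour.

```python
# Sketch of the point-source external dose formula: dose_rate = Gamma * A / d**2.
# GAMMA_AM241 is an ASSUMED, illustrative dose constant for Am-241's 59 keV
# gammas; consult a health-physics reference for an authoritative value.

GAMMA_AM241 = 3.1e-6     # assumed dose constant, Sv*m^2 / (h*GBq)
ACTIVITY_GBQ = 3.7e-5    # 1 uCi = 37 kBq = 3.7e-5 GBq

def dose_rate_sv_per_hour(gamma: float, activity_gbq: float, distance_m: float) -> float:
    """Whole-body dose rate from a point photon source at a given distance."""
    return gamma * activity_gbq / distance_m**2

d3 = dose_rate_sv_per_hour(GAMMA_AM241, ACTIVITY_GBQ, 3.0)
d6 = dose_rate_sv_per_hour(GAMMA_AM241, ACTIVITY_GBQ, 6.0)
print(d3, d3 / d6)  # doubling the distance cuts the dose rate by a factor of 4
```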

Of course, the average worldwide dose from natural background radiation is almost 1000 times that – around 2.4 to 3 millisieverts per year, and in some places, far, far higher.

However, there is a more surprising and interesting context that we can put such a dose rate in. If you sleep in bed next to your partner every night, then the ionising radiation dose that you receive, due to the radioactivity of your partner’s body, from 40K and things like that, is about five microsieverts per year. [Source]

That’s assuming a realistic amount of sleep each day, of course – if you were actually in bed 24 hours per day, the dose from this source would be three times as much: 15 μSv per year.

Thus – the dose from sleeping next to your partner is 4.9 times what it is from the smoke detector. Surprising, isn’t it?

Obviously there is no reason to expect any significant health physics implications of any kind at such extremely low doses. Heck, such low doses are probably even too small to have potential significance with regard to radiation hormesis. But we do know that smoke detectors are proven to be extremely effective at protecting against the threat of fire destroying your home.

* Just as an aside: although technically illegal in the US, and possibly in other jurisdictions as well, people (usually physicists and others who know what they’re doing) do often remove these sources and use them for educational demonstrations of radioactivity and Geiger-Marsden style scattering, for charged-particle absorption experiments, and to test and calibrate alpha and X-ray detectors. It’s quite tempting, since these devices are far, far less expensive than purchasing exactly the same sort of tiny sealed radioactive sources through “proper” channels, and no more dangerous.

No, black holes produced in the Large Hadron Collider (LHC) aren’t going to kill you, eat Geneva, or destroy the Earth. If those black holes are formed, it will be really quite fantastic, and it will represent everything that we’ve hoped for from LHC and more.

I’ll take a couple of hours from my Saturday afternoon to write this, in the hope that it reduces, even if just infinitesimally, the number of stupid conversations I have to listen to in which people’s entire knowledge of particle physics comes from the Herald Sun. I consider that a worthwhile investment of my time, and if it means I don’t have to hear about teenage girls tragically taking their own lives for absolutely no reason at all because of such nonsense, then that’s a really good thing, too.

If the centre-of-mass energy of two colliding elementary particles (a maximum of 14 TeV for collisions in the Large Hadron Collider) reaches the Planck scale, and their impact parameter, b, is smaller than the corresponding Schwarzschild radius, r_s, then a black hole will indeed be produced. However, the energy corresponding to the Planck scale – around 10^19 GeV – is a lot of energy, if you’re an experimental physicist. Such energies are entirely outside the reach of the experimental physicist – so, surely, generation of microscopic black holes at the Large Hadron Collider (LHC) has got to be impossible – doesn’t it?

According to the Standard Model of particle physics, black hole generation in a particle collider is indeed impossible at the TeV-scale energies associated with the current generation of high-energy experimental particle physics endeavours, such as the LHC. The much-publicised speculations regarding the possibility of black hole formation at the LHC are based on hypotheses derived from theoretical models of cosmology and particle physics beyond the Standard Model (“new physics”).

Certain models put forward some years ago by theoretical physicists offer a seemingly neat and efficient lead into answering the questions, such as those of the hierarchy problem, of interest to particle physicists, and involve the existence of higher spatial dimensions.

The novelty of these higher-dimensional models lies in the fact that it is no longer necessary to assume that these extra dimensions have sizes close to the Planck length (~10^-35 m). Rather, large extra dimensions could be as large as around a millimetre, if we suppose that the ‘fields of matter’ – those fields of relevance to electroweak interactions and QCD, for example – ‘live’ in the 3+1 dimensional hypersurface of our 3-brane – our familiar 3+1 dimensional world – and that only the gravitational field can interact across the higher-dimensional universe. Experiments involving the direct measurement of Newtonian gravity put upper bounds on the size of extra dimensions at less than a few hundred microns. Under such an approach, the traditional Planck scale, corresponding to M_Pl ~ 10^19 GeV, is no more than an effective scale, and the real fundamental Planck scale M_D in D = 4 + n dimensions is given by M_D^(n+2) = M_Pl^2 / V_n, where V_n is the volume associated with the n extra dimensions. In 10 dimensions, with the radii associated with the extra dimensions at the Fermi scale, we find M_D on the order of the TeV scale.
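That relation is easy to evaluate numerically. A rough sketch in natural units, under the stated assumptions (n equal-radius extra dimensions, radii at the Fermi scale); the numbers are illustrative order-of-magnitude inputs:

```python
# Sketch of the ADD-style relation M_D**(n+2) = M_Pl**2 / R**n, with the
# volume of n equal extra dimensions taken as V_n = R**n (natural units).
# Inputs are illustrative: M_Pl ~ 1.22e19 GeV, R at the Fermi scale (~1 fm).

HBARC_GEV_M = 1.973e-16   # hbar*c in GeV*m, converts metres to GeV^-1
M_PLANCK_GEV = 1.22e19

def fundamental_scale_gev(n_extra: int, radius_m: float) -> float:
    """Fundamental Planck scale in 4 + n_extra dimensions."""
    r_natural = radius_m / HBARC_GEV_M  # radius in GeV^-1
    return (M_PLANCK_GEV**2 / r_natural**n_extra) ** (1.0 / (n_extra + 2))

# Ten dimensions (n = 6), radii at the Fermi scale: lands near the TeV scale.
print(f"{fundamental_scale_gev(6, 1e-15):.1e} GeV")
```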

OK, now, if you’re thinking that that all sounds horribly complicated, I understand. I don’t understand it all, either. Allow me to try and re-explain that.

Imagine two particles being smashed together within the collider. As they come together, their gravitational interaction increases according to the familiar inverse square law of classical gravitational physics. The formation of a black hole, at least the astrophysical ones with which we’re familiar, is a phenomenon which is all about gravity. For the force of gravity to become strong enough to create a black hole in our proton-proton system, these protons would have to be brought together to within a distance on the scale of the Planck length – about 10^-35 m. The energy corresponding to such a distance scale is around 10^19 GeV – an energy that dwarfs the most energetic cosmic rays known – which themselves dwarf the proudest achievements of Earthly accelerator physicists.
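The correspondence between a distance scale and an energy scale comes from E ~ ħc/d. A minimal sketch:

```python
# The energy scale probed at a given distance follows from E ~ hbar*c / d.
# Uses hbar*c ≈ 1.973e-16 GeV*m.

HBARC_GEV_M = 1.973e-16

def energy_scale_gev(distance_m: float) -> float:
    """Energy scale corresponding to a probed distance scale."""
    return HBARC_GEV_M / distance_m

# Probing the Planck length (~1.6e-35 m) requires Planck-scale energies:
print(f"{energy_scale_gev(1.6e-35):.1e} GeV")  # ~1e19 GeV
```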

But this all assumes that the gravitational force exists only within the familiar world of three dimensions. If gravity extends across the higher spatial dimensions, the force of gravitational interactions increases much more rapidly with increasing proximity of the particles, and at very small length scales associated with high energy experimental particle physics, it’s just barely possible that such phenomena can start to become relevant.

So suppose when the particles interact at these very high energies, they’re interacting on the scale of the higher-dimensional world. As gravity “becomes strong”, black hole formation could start to become relevant on a length scale on the order of 10^-19 m. That is still an inconceivably small scale compared to everyday experience, but it is a factor of ten thousand trillion times closer to our reach than the Planck length in a three-dimensional universe. And this length scale corresponds to energies of only a couple of teraelectronvolts – TeV-scale energies that are well within the grasp of the LHC.

NB: For an extremely readable introduction to these possibly daunting concepts of theoretical particle physics and cosmology, which is fully accessible to all readers, with no prior knowledge of advanced physics required, Lisa Randall’s Warped Passages is suggested reading.

If such models have any physical meaning, the TeV scale is effectively a natural choice (and not an arbitrary one based on phenomenological motivations), because it essentially presents a resolution to the hierarchy problem. [1]

If the signature of the decay of a microscopic black hole is observed within the LHC’s detectors, then on that day, these fascinating models are no longer theoretical physics – they represent our best empirically-motivated description of what nature is.

If the Planck scale is thus brought down into the TeV range, then microscopic black hole generation in TeV-scale particle collider experiments is indeed possible – maybe. The 14 TeV centre-of-mass energy of the LHC could allow it to become a veritable black hole factory, with an appreciable production rate.

Many studies are underway to make a precise evaluation of the cross-section for the creation of black holes via parton collisions, but it appears that the naive geometric approximation, σ ≈ π·r_s², is quite reasonable for setting the orders of magnitude. [1]
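For a feel for the numbers, the geometric approximation can be evaluated directly. The Schwarzschild radius used here is an assumed TeV-scale illustrative value, not a model prediction:

```python
# Naive geometric cross-section sigma ≈ pi * r_s**2 for black hole production,
# with an ASSUMED TeV-scale Schwarzschild radius r_s ~ 1e-19 m for illustration.

import math

def geometric_cross_section_m2(r_s_m: float) -> float:
    """Geometric (disk) cross-section for a given Schwarzschild radius."""
    return math.pi * r_s_m**2

sigma = geometric_cross_section_m2(1e-19)
print(f"{sigma:.1e} m^2 = {sigma / 1e-28:.1e} barn")  # 1 barn = 1e-28 m^2
```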

The possibility of the presence of large extra dimensions would be doubly favourable for the production of black holes. The key point is that it allows the Planck scale to be reduced to accessible values, but additionally, it also allows the Schwarzschild radius to be significantly increased, thus making the condition for black hole formation distinctly easier to satisfy.

One notable property of any microscopic black holes that could result from LHC collisions is that their radii would correspond to the TeV scale – around 10^-19 m – much smaller than the size of the large extra dimensions. Hence, any such black hole can be considered as totally immersed in a D-dimensional space (which has, to a good approximation, a time dimension and D − 1 large, non-compact spatial dimensions). It follows that such a black hole does indeed rapidly ‘evaporate’ into fundamental particles, with an extremely short lifetime, on the order of 10^-26 seconds. [1] This argument is exclusively based on the same theoretical physics that predicts the possibility of microscopic black hole formation at TeV-scale energies, and is completely independent of the oft-cited argument regarding Hawking radiation.

The temperature of such a black hole, typically about 100 GeV under these conditions, is much lower than it would be for a black hole of the same mass in four-dimensional space; nevertheless, the black hole retains the expected characteristic quasithermal radiation spectrum corresponding to its temperature.

In the case of the hypothetical microscopic black holes, if they can be produced in the collisions of elementary particles, they must also be able to decay back into elementary particles. Theoretically, it is expected that microscopic black holes would indeed decay via Hawking radiation, a mechanism based on fundamental physical principles, for which there is general consensus as to the validity.

It is well established in the literature that there is no way a black hole produced at the LHC could possibly accrete matter in a potentially Earth-destroying fashion, even if, somehow, it turned out that such a black hole were particularly stable. [3] [4] If the physical models that provide a basis for considering the possibility of black hole formation in TeV-scale particle collisions are indeed valid, then microscopic black holes would be produced not only in our particle collider experiments, but also in high-energy cosmic ray interactions, and some of those black holes would have stopped in the Earth or in other astronomical bodies. The continued stability of these astronomical bodies demonstrates that such black holes, if they exist, cannot possibly present any credible danger of planetary destruction.

Familiar astrophysical black holes have very large masses, ranging from several solar masses to perhaps as high as a billion solar masses for the largest supermassive black holes. On the other hand, the maximum centre-of-mass energy of collisions in the LHC corresponds to an equivalent mass on the order of 10^-53 solar masses. If a microscopic black hole is produced in the LHC, it will have a mass far, far, far smaller than any black hole with which astrophysicists may be familiar, and possibly markedly different characteristics as well.
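That mass equivalence is a one-line calculation via E = mc², using rounded conversion constants:

```python
# Mass equivalent of the LHC's 14 TeV centre-of-mass energy, in solar masses.
# Rounded constants: 1 GeV/c^2 ≈ 1.783e-27 kg, M_sun ≈ 1.989e30 kg.

GEV_TO_KG = 1.783e-27
M_SUN_KG = 1.989e30

def tev_to_solar_masses(energy_tev: float) -> float:
    """Mass equivalent (E = m c^2) of a collision energy, in solar masses."""
    return energy_tev * 1e3 * GEV_TO_KG / M_SUN_KG

print(f"{tev_to_solar_masses(14):.1e} solar masses")  # on the order of 1e-53
```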

The rate at which any stopped black hole would accrete the matter surrounding it and grow in mass is dependent on how it is modeled. Several scenarios for hypothetical matter accretion into a terrestrial black hole have been studied and reported in the literature, where well-founded macroscopic physics has been used to establish conservative or worst-case-scenario limits to the rate of accretion of matter into a terrestrial black hole.

In the extra-dimensional scenarios that motivate the possibility of black hole formation at TeV-scale energies (but which also imply the extreme instability of those black holes), the rate at which such a black hole would accrete matter would be so slow, in a ten-dimensional universe, that the Earth would survive for billions of years before any harm befell it – a limiting time scale for Earth’s survival comparable to that already set by the finite main-sequence lifetime of the Sun.

At the intersection of astrophysics and particle physics, cosmology and field theory, quantum mechanics and general relativity, the possibility of microscopic black hole production in particle collider experiments such as the LHC opens up new fields of investigation and could constitute an invaluable pathway towards the joint study of gravitation and high-energy physics. Their possible absence already provides much information about the early universe; their detection would constitute a major advance. The potential existence of extra dimensions opens up new avenues for the production of black holes in colliders, which would become, de facto, even more fascinating tools for penetrating the mysteries of the fundamental structure of nature. [1]

The production of microscopic black holes at the LHC is just barely possible, perhaps. These microscopic black holes do not represent some kind of doomsday scenario for the Earth – quite the opposite, in fact.
They represent, arguably, some of the most incredible insights into exciting new physics that we could wish to take away from the LHC, with profoundly interesting implications for the future of physics.

There have also been suggestions over the years that there may exist magnetic monopoles – particles with non-zero free “magnetic charge”. As was originally established by Dirac, any free magnetic charge on a monopole will be quantised, as electric charge is, and necessarily much larger in magnitude than the elementary quantum of electric charge. For this reason, past efforts to look for evidence of a magnetic monopole have looked for strongly ionising particles with quantised magnetic charge.
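Dirac's quantisation condition, e·g = nħc/2, fixes just how much larger that magnetic charge must be. A quick sketch of the minimum ("Dirac") charge in units of e:

```python
# Dirac's quantisation condition e*g = n*hbar*c/2 gives a minimum magnetic
# charge g_D = e / (2*alpha), i.e. about 68.5 times the elementary electric
# charge - which is why monopoles would be such strongly ionising particles.

ALPHA = 1 / 137.036  # fine-structure constant

def dirac_charge_in_units_of_e(n: int = 1) -> float:
    """Quantised magnetic charge, in units of the elementary charge e."""
    return n / (2 * ALPHA)

print(f"{dirac_charge_in_units_of_e():.1f} e")  # ≈ 68.5 e
```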

In some grand unified theories, though not in the Standard Model of particle physics, magnetic monopoles are predicted to possibly be able to catalyse the decay of the protons comprising ordinary baryonic matter into leptons and unstable mesons. If this is the case, successive collisions between a monopole and atomic nuclei could release substantial amounts of energy. Magnetic monopoles that may possess such properties are predicted to have masses as high as ~10^16 GeV/c², [4] or higher, making them far too massive to be produced at the LHC.

The impact of such magnetic monopoles striking the Earth has been quantitatively discussed in the literature [2], where it was concluded that the ability of a monopole to catalyse nucleon decay would allow it to destroy nothing more than an infinitesimal, microscopic quantity of matter before exiting the Earth.

Independent of this conclusion, if magnetic monopoles could be created in collisions in the LHC, high-energy cosmic rays would already be creating numerous magnetic monopoles, with many of them striking the Earth and other astronomical bodies. Due to their large magnetic charges and strong ionising effect, any magnetic monopoles thus generated would be rapidly stopped within the Earth.

The continued existence of the Earth and other astronomical bodies over billions of years of bombardment with ultra-high-energy cosmic rays demonstrates that any such magnetic monopoles can not catalyse proton decay at any appreciable rate. If particle collisions within the LHC could produce any dangerous magnetic monopoles, high-energy cosmic ray processes would already have done so. [3] [4]

As with the consideration of the possibility of black hole formation in TeV-scale particle collisions at the LHC, the continued existence of the Earth and of other astronomical bodies such as the Sun demonstrates that any magnetic monopoles produced by high-energy processes – be they in a particle collider or in a high-energy cosmic-ray interaction – must be harmless.

As with the case of black hole production within the LHC, these arguments are not to say that such a process could not occur – only that such processes are not dangerous.

Whilst the existence of a magnetic monopole is viewed with the most extreme skepticism by the overwhelming majority of physicists, perhaps there is just the remotest possibility that one exists. The formation of a magnetic monopole at the LHC will not endanger the Earth, but if such a monopole were to be detected, we would of course find ourselves faced with the prospect of re-writing two hundred years of physics.

As with the formation of a microscopic black hole, the formation of a magnetic monopole does not represent some kind of doomsday scenario – it represents one of the most incredible revelations we could possibly hope to find from experiments at the LHC.

This post was inspired, in part at least, by Rod Adams’ post on NEI Nuclear Notes recently, asking about the georeactor theory. I hope you find it useful, Rod.

The “georeactor” hypothesis is a proposal by J. Marvin Herndon that a fissioning critical mass of uranium may exist at the Earth’s core and indeed that it serves as the energy source for the Earth’s magnetic field. You can read all about Herndon’s ideas at his website.

Herndon’s georeactor hypothesis is not widely accepted at all by the scientific community, outside of Herndon himself and a very small number of defenders.

Herndon’s georeactor hypothesis is sometimes confused with the existence of natural nuclear fission reactors in the Earth’s crust, in the rich uranium deposits at Oklo in Gabon, Africa – however, it must be stressed that these are not the same thing. There is absolutely no doubt at all, scientifically, as to the occurrence of nuclear fission and the formation of natural nuclear “reactors” at Oklo approximately two billion years ago.

However, Rob de Meijer and associates at the Nuclear Physics Institute in Groningen, the Netherlands, are sympathetic towards Herndon’s theory, and have indeed proposed an experiment by which it should be somewhat falsifiable – measuring the antineutrino flux from the Earth’s core, which they believe will validate the georeactor hypothesis.

Fission reactors generate huge numbers of electron antineutrinos – about 10^26 per day from a typical manmade power reactor. Several thousand of these can be detected per day in a detector of modest size placed outside the reactor and its containment, tens of metres away.

The antineutrinos resulting from the fission of uranium and of plutonium have different total count rates and energy spectra – the antineutrinos are not actually produced by nuclear fission itself, but by the beta decay of the fission products. The antineutrinos therefore carry with them information about the amount and type of fissile material in the reactor core, and the rate at which it is being fissioned.
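The 10^26-per-day figure above follows from two rules of thumb: each fission releases roughly 200 MeV of energy, and the beta decays of its fission products emit roughly six antineutrinos. A hedged order-of-magnitude sketch:

```python
# Order-of-magnitude sketch of a power reactor's antineutrino output, assuming
# ~200 MeV of thermal energy per fission and ~6 antineutrinos per fission
# (from fission-product beta decays). Both figures are rough rules of thumb.

E_FISSION_J = 200e6 * 1.602e-19   # ~200 MeV per fission, in joules
NU_PER_FISSION = 6

def antineutrinos_per_day(thermal_power_w: float) -> float:
    """Approximate daily antineutrino emission for a given thermal power."""
    fissions_per_s = thermal_power_w / E_FISSION_J
    return fissions_per_s * NU_PER_FISSION * 86400  # seconds per day

# A 3 GW(th) power reactor emits on the order of 1e26 antineutrinos per day:
print(f"{antineutrinos_per_day(3e9):.1e}")
```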

Because of this, incidentally, there has been considerable recent interest in using antineutrino detectors to provide a simple, real-time, online measurement of the fuel burnup, operating status, power level, plutonium production and other characteristics of operating nuclear reactors – a capability of considerable utility in enforcing non-proliferation safeguards.

(There’s more information on this application here if you’re interested.)

Personally, I don’t see why existing underground neutrino observatories, such as Super-Kamiokande, the Sudbury Neutrino Observatory, and the IceCube experiment in Antarctica, shouldn’t be sufficient to provide significant insights into the presence – or absence – of georeactor antineutrinos. Clearly, all neutrinos from a “georeactor” would come exclusively from exactly the centre of the Earth as observed at every detector, and they should be detectable at all neutrino observatories worldwide with a similar flux everywhere.

Combining these simple pieces of information with the expected energy spectra of neutrinos from uranium fission, it seems extremely plausible that the georeactor hypothesis can well and truly be put to the test, using existing experiments, and probably even with existing collections of raw data from these experiments.

Uranium, being incompatible in an iron-based alloy, is expected to precipitate at a high temperature, perhaps as the compound US. As density at Earth-core pressures is a function almost exclusively of atomic number and atomic mass, uranium, or a compound thereof, would be the core’s most dense precipitate and would tend to settle, either directly or through a series of steps, by gravity to the center of the Earth, where it would quickly form a critical mass and become capable of self-sustained nuclear fission chain reactions.

Of course, there is what seems like one significant problem with this theory. Several billion years ago, the proportion of uranium-235 in natural uranium was much higher than it is today – equivalent to that of manmade enriched uranium – because U-235 decays faster than U-238: a much larger ratio of U-235 existed when the uranium was originally formed inside supernovae than is seen in the Earth today. That is why fission occurred at Oklo two billion years ago but does not occur there today – there is no longer a sufficient concentration of U-235 in nature. How, then, can a “georeactor” exist?
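You can back-extrapolate the U-235 fraction yourself from the two half-lives and today's abundance. A minimal sketch:

```python
# Back-extrapolate the U-235 fraction in natural uranium from the isotopes'
# half-lives (U-235: ~7.04e8 yr, U-238: ~4.468e9 yr) and today's ~0.72% U-235.

import math

HALF_LIFE_U235 = 7.04e8          # years
HALF_LIFE_U238 = 4.468e9         # years
FRACTION_U235_TODAY = 0.0072

def u235_fraction(years_ago: float) -> float:
    """Fraction of U-235 in natural uranium at a given time in the past."""
    lam235 = math.log(2) / HALF_LIFE_U235
    lam238 = math.log(2) / HALF_LIFE_U238
    ratio_today = FRACTION_U235_TODAY / (1 - FRACTION_U235_TODAY)
    # Running the decay backwards: U-235 decays faster, so its share grows.
    ratio_then = ratio_today * math.exp((lam235 - lam238) * years_ago)
    return ratio_then / (1 + ratio_then)

# Around the time of the Oklo reactors (~2 billion years ago), natural uranium
# was a few percent U-235 - comparable to modern reactor-grade enriched fuel:
print(f"{u235_fraction(2e9) * 100:.1f}%")
```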

Herndon explains away this question by postulating that the georeactor is something like a fast breeder reactor, started up aeons ago when the U-235 was more abundant, and today burning the abundant U-238 into plutonium-239.

However, if this is the case, shouldn’t we expect to see plutonium-fissioning “breeder reactors” operating in rich uranium deposits in the Earth’s crust, like at Oklo, today?

Sustainable Energy – Without the Hot Air is a popular book written by David J.C. MacKay, who is Professor of Natural Philosophy in the department of physics at the University of Cambridge. It’s currently available for download, but it is still at the draft stage.

Read some of this – isn’t it great! I haven’t read the whole thing yet, but it really looks impressive to me, it’s saying the things that I really think need to be said.

How can we replace fossil fuels? How can we ensure security of energy supply? How can we solve climate change?

We’re often told that ‘huge amounts of renewable power are available’ – wind, wave, tide, and so forth. But our current power consumption is also huge! To understand our sustainable energy crisis, we need to know how the one ‘huge’ compares with the other. We need numbers, not adjectives.

This heated debate is fundamentally about numbers. How much energy could each source deliver, at what economic and social cost, and with what risks? But actual numbers are rarely mentioned. In public debates, people just say “Nuclear is a money pit” or “We have a huge amount of wave and wind.” The trouble with this sort of language is that it’s not sufficient to know that something is huge: we need to know how the one ‘huge’ compares with another ‘huge’, namely our huge energy consumption. To make this comparison, we need numbers, not adjectives.