Monday, December 17, 2012

The video below is a fractal. It is also three-dimensional. And it has been rendered from two viewpoints very close to each other, so you can view it in stereoscopic 3D.

If you're not used to using YouTube's 3D capabilities then don't worry: this guy has a tutorial video explaining how to see the full three-dimensionality of the video without the need for glasses. It's basically Magic Eye in reverse (though I'm not sure how good it is for your eye muscles).

Neither the Mandelbulb, the Mandelbrot set, nor the 3D fractal (a Mandelbox) shown above was designed by a human mind. All of the complexity found in the images comes from defining structures in two- or three-dimensional space as the set of points that are, or are not, solutions to relatively simple mathematical algorithms. For example, the algorithm describing the Mandelbrot set can be stated in just one line:

"... the Mandelbrot set is the set of values of c in the complex plane for which the orbit of 0 under iteration of the complex quadratic polynomial \(z_{n+1} = z_n^2 + c\) remains bounded".

All of the complexity you can see in the entire ten minutes of the Mandelbrot set video I linked to above is defined in that one simple sentence.
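To make that one-line definition concrete, here is a minimal sketch (my own illustration, not from the post) of how you might test whether a point \(c\) belongs to the set. The iteration cap and the escape radius of 2 are the standard practical choices: if \(|z|\) ever exceeds 2, the orbit is guaranteed to diverge.

```python
def in_mandelbrot(c, max_iter=100, bound=2.0):
    """Return True if the orbit of 0 under z -> z**2 + c stays
    bounded (|z| <= bound) for max_iter iterations."""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > bound:
            return False  # orbit escaped: c is outside the set
    return True

print(in_mandelbrot(0))    # True: the orbit of c = 0 stays at 0
print(in_mandelbrot(1))    # False: orbit 0, 1, 2, 5, 26, ... diverges
```

Colouring each point by how quickly its orbit escapes is what produces the familiar psychedelic images; every pixel in those ten minutes of video is just this loop run for a different value of \(c\).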

Monday, December 10, 2012

Animations of scientific principles are becoming more and more popular as a way of condensing complex data into an easily accessible format, particularly in the field of biology. Nonetheless, a recent article in Nature has raised a number of interesting points about how the visualisation of biological processes should not be taken lightly. Biology is unnervingly complex and there is still much that we don't understand - how are we to know how much of an animation is based on actual data and how much is just 'filling in the gaps'? This is not limited to the layperson: humans are very visual creatures, and we are more easily swayed by pictures than by words; experts are no exception. Nor is this new - journals have included idealised representations of biological processes for decades - but advances in computer animation have opened the door to more sophisticated animations that may imply a more thorough understanding where one does not exist.

That said, I don't believe that researchers actively seek to mislead when presenting their findings in animated form, rather that they have to take the necessary steps to complete the movie - inherently requiring some artistic licence. And, for the most part, the bits being filled in are done so with reasonable scientific assumptions in mind and are not wild fantasy. The medium is an exciting one, and one that will hopefully play a significant role not only in disseminating scientific understanding, but also in furthering research by highlighting gaps in our understanding. We must, however, always be vigilant when interpreting these animations as they are exactly that - animations - and not actual footage of molecular biology.

An excellent example of biological animation is the 'Inner Life of a Cell' video by a group at Harvard. I love this video, which depicts the events that occur upon the activation of a T cell, and it is pretty accurate in that almost everything shown is backed up by real evidence. The 'motor protein' kinesin at 3:40 is particularly impressive because its mechanism of 'walking' along microtubules is backed up by extensive structural and biochemical studies, yet it just looks so much like a drunk guy who's been pulled over by the police and is trying to walk in a straight line! If you get the chance, I really recommend watching the video and reading the article mentioned above. Enjoy!

Tuesday, December 4, 2012

The previous post in this series can be found here. In the first post of this 'human machine' series, I explained how 'energy' (that abstract entity) is processed and used by our bodies in order to convert the chemical energy in our food into the work energy required to keep us ticking over nicely. In it I discussed how we are all actually powered by electrical circuits that buzz along in the internal membranes of our cells' power stations, the mitochondria. Better yet, not only are we powered by currents of electrons, familiar to us as standard electricity, but also by currents of protons, and so are actually working off energy being extracted from two forms of electrochemical potential. We're pretty sophisticated machines!

The work energy generated by these processes is used in myriad ways, but one very important one is the creation of another electrical current that is the foundation of everything you've ever done and every thought you've ever had: the neuronal action potential. These are the electrical signals that run along the neurons in your brain and body in general, constantly relaying information back and forth throughout the whole complex machine. Without them we would be like plants, with one part of our bodies completely unaware of what's happening to the rest, and animal life as it is familiar to us would be entirely impossible. Most people have, I expect, heard of the notion of electrical signals running throughout our bodies (it's why the machines built the Matrix, right?), but few will actually know what that means. In today's post I'm going to be talking about what neuronal signals actually are, and so explain why being hit by lightning is a bad thing but being defibrillated (like in ER) can be a good thing.

Monday, November 26, 2012

For as long as I can remember being even remotely aware of international politics I have known one thing to be certain: Arabs and Israelis don't get on! It's a fact that I've grown up knowing and one that has been reinforced time and time again in recent years with seemingly unbreakable cycles of violence and ever more dangerous and confrontational political rhetoric from both sides. The most recent exchange of fire between Gaza and Israeli cities is simply the latest chapter in this sad tale of a fractured region.

But how will the story end? Little diplomatic progress has been made in the resolution of the conflict in the 60 years that it has been raging, and indeed the conflict has spread to bring countries such as Iran and Pakistan into the front line of political warfare. The historic, cultural, and religious differences between the two sides seem simply insurmountable, and so a bloody (and potentially radioactive) conclusion seems a terrifyingly possible outcome.

Yet, in the midst of all the hatred and mistrust, there is a glimmer of hope on the diplomatic front that has come in the form of a collaborative scientific project. Sesame, which stands for 'Synchrotron-light for Experimental Science and Applications in the Middle East', is a multi-million dollar particle accelerator currently under construction near Amman, Jordan. Synchrotrons are fantastically useful facilities capable of producing a form of light known as synchrotron radiation that can be harnessed to investigate materials on unbelievably tiny scales. One such application is the study of proteins and other biological molecules down to atomic scales such that their structure and function can be better understood, and potentially so that we can develop more sophisticated drugs to target them. I recently wrote about the 2012 Nobel Prize for Chemistry awarded to Robert Lefkowitz and Brian Kobilka, which would have been entirely impossible without facilities such as Sesame.

Monday, November 12, 2012

Last week I eagerly sat down to watch the first episode of a new series: Dara O Briain's Science Club. For those of you from outside the British Isles, Dara O Briain is an Irish comedian who, in recent years, has become one of the most popular comedians and broadcasters in the UK. Not only is he a very funny guy, he's also got a pretty sharp mind inside his (frankly massive) shiny head: he studied mathematics and theoretical physics at University College Dublin and has managed to hang on to his love of science despite moving into the world of entertainment. He, along with other big names like Brian Cox and James May, has been instrumental in advancing British popular science broadcasting in the last decade and has presented a number of science programmes, such as School of Hard Sums and Stargazing Live, giving science that much-needed welcoming and friendly face.

His new series is most definitely worth watching and I await the next episodes with bated breath. The first was on the subject of genetics and epigenetics and my curiosity was more about how these complex topics would be presented rather than actually learning something (I'm already fairly familiar with the fields)! I was delighted by the casual and approachable way in which it was structured, and how debates about scientific funding and application were mixed in with the hard facts.

Possibly my favourite moment, however, was when we were shown how to perform, in your own home, a simple task that I am very used to doing in the lab: extracting DNA. Perhaps appropriately given the latest addition to the James Bond franchise, this entailed the use of cocktail-making equipment and the kind of very strong vodka needed to make that perfect Martini. Some might think of this as a bit gimmicky and irrelevant, but I quite like the idea of making somewhat abstract scientific principles more tangible in the mind of the general public. Bringing such a standard research procedure into people's homes helps to demystify the scientific method and hopefully gives people a greater sense of ownership over this work than they might otherwise have.

So, today's post is a shameless plug for Dara O Briain's Science Club with the aforementioned DNA recipe thrown in for those of you unfortunate enough not to be able to watch it online! Enjoy.

1. Collect some of your cheek cells by swishing some (around 100 ml) salt water around your mouth for 30 seconds or so. The solution will be a bit cloudy afterwards.

2. Add a few drops of washing-up liquid (to dissolve the cells' membranes) and a shot of pineapple juice (the proteases in this will degrade the myriad proteins found in your cells). Pop it all into a cocktail shaker and give it your best shake!

3. Pour through a cocktail strainer to remove bubbles, ideally into a martini glass or something in which it's easy to layer different liquids.

4. Chill some very strong (>80% abv) vodka on ice and then carefully layer it over the top of your mushed-up cell solution. At such a high concentration of alcohol, DNA comes out of solution and so precipitates at the boundary between the two layers. This looks like a white cloud forming at the bottom of the vodka layer, which can be scooped out by wrapping it around a toothpick or something similar. Et voilà! It may not look like much, but you have successfully extracted the chemical instructions that make you you. Not bad for 5 minutes' work.

Tuesday, November 6, 2012

A long time ago I posted about an online game called FoldIt. After I made that post various other examples of crossovers between video games and science have been brought to my attention. Many of these, I have linked to from The Trenches of Discovery's Facebook and Google+ pages and some I've been saving for a rainy day. (Note, we often post links/comments to those pages when we don't consider them worth a whole blog post. If you read this blog, but aren't following either of those pages, you're missing out on some interesting stuff. You should remedy this.)

Well, it isn't rainy, but it is foggy, so the day has as good as come. What follows is a run through of some of the various science/video game crossovers I'm aware of. Some of these are neat video games, designed to either teach an aspect of science, or try to give a phenomenological experience of what that aspect of science means for the world. Others are more like FoldIt, they are puzzle games that you try to solve, and in the process you're actually helping the scientists solve their research problems.

Tuesday, October 30, 2012

[The following is a guest post from Simon Thwaite. Simon recently completed his doctorate in the subdepartment of Atomic & Laser Physics at the University of Oxford. He is currently in the limbo that lies between the submission of a doctoral thesis and its examination, and is looking forward to taking up a postdoctoral research fellowship in the Theoretical Nanophysics group at the Ludwig Maximilian University, Munich, from January 2013. In Part 1 of this post he discusses the foundations of the field of atomic, molecular, and optical physics, and describes the process of laser cooling, an experimental technique for cooling atoms to extremely low temperatures. This technique forms the foundation for many of the current experiments in the field. In Part 2 of this post he describes the experiments carried out by Haroche and Wineland, and discusses the possible applications and future directions of their work.]

Those with their finger on the physics pulse will have seen that the 2012 Nobel Prize in Physics was recently awarded jointly to Serge Haroche and David J. Wineland for their development of "ground-breaking experimental methods that enable measuring and manipulation of individual quantum systems". This announcement raises a number of questions for physicists and physics followers alike: what is meant by an ‘individual quantum system’, and why would anyone want to measure and manipulate such a thing? What kind of experiments do Haroche and Wineland do, and what new scientific and technological possibilities does their research unlock? And -- last but not least -- will Prof. Wineland be involved in the imminent month of Movember? Because if he is, everyone else might as well just go home right now.

Atomic, molecular, and optical physics: a brief history

The research of Haroche and Wineland falls within the field of atomic, molecular, and optical (AMO) physics, which studies how particles of matter (atoms, ions, and molecules) interact both with one another and with particles of light (photons: see figure), and how these interactions can be controlled and exploited to engineer systems of particular scientific or technological interest. AMO physics is currently a highly active and dynamic area of research, with applications which range from questions of fundamental scientific interest (e.g. is it possible that the fine structure constant is actually changing slowly with time?) through to real-world technologies (e.g. the development of ultra-precise atomic clocks for the definition of universal time and frequency standards). It has also enjoyed something of a Golden Age in recent years, with the 2012 Nobel Prize in Physics being the third in the last 15 years (after 1997 and 2001) to be awarded for work in the field.

The physical theory that governs the world in the ultra-small regime of atoms, molecules, and photons is the theory of quantum mechanics, which describes both the behavior of these individual quantum systems and the way in which they interact. The roots of AMO physics can thus be traced back to the early part of the 20th century, when quantum mechanics was developed in the course of the search for a better understanding of such phenomena as the radiation emitted by hot objects and the internal structure of atoms.

A rough sketch of an atom (not actual size). The electrons orbiting the nucleus are only permitted to occupy a certain discrete set of energy levels.

Building on the theory of quantum mechanics, atomic physics advanced at a breathtaking pace throughout the first half of the 20th century. In contrast, research into the 'optical' part of AMO physics progressed at a more sedate pace. While much of the required theoretical knowledge already existed – the wave theory of light and the electronic structure of atoms both being well understood by this point – rapid progress in any field requires interaction between theory and experiment, and the absence of any technology that could produce a focused, powerful, and wavelength-specific (i.e. single-colour) source of light severely restricted the sophistication of possible atom-light experiments.

This state of affairs changed drastically with the invention of the laser in the early 1960s. Developing out of radar and microwave research carried out during the Second World War at Bell Laboratories, lasers provided a light that was radically different from anything seen before: in addition to containing only a single pure wavelength, laser light is well collimated (i.e. forms a well-defined beam) and can easily be millions of times more intense than any other light source. At a stroke, the door was opened to a whole range of possible new atom-light experiments, ushering in a new era in the discipline of atomic, molecular and optical physics.

Laser light can be viewed as either a travelling electromagnetic wave (left) or a stream of photons (right).

Laser cooling: the beginning of the Golden Age

One particularly striking demonstration of the possibilities that laser light provides for controlling and manipulating atoms has been the development of laser cooling: using tightly-focused beams of laser light to slow down, or cool, a collection of atoms in a gas. Developed throughout the 1980s and recognized with the 1997 Nobel Prize in Physics, laser cooling is today ubiquitous in a wide range of AMO physics experiments, and forms the foundation for the fertile subfield of ultracold atoms.

But you thought lasers could only heat things up, or burn holes in them? Then read on.

The atoms in a gas at room temperature move about very rapidly (their speed depends on the temperature of the gas, but in any case is of the order of several hundred metres per second). Now imagine that you’re an experimental physicist, and your goal is to manipulate and interact with these atoms in some kind of precise, controlled way -- for example, you might want to carry out some spectroscopy on them in order to measure the exact frequencies of light that this atomic species absorbs and emits. In this case, working with a ‘hot’ gas of rapidly-moving atoms is far from ideal -- in fact, it’s a complete disaster. Since the atoms are moving reasonably quickly, the radiation they absorb and emit is subject to a significant Doppler shift, making precise frequency measurements impossible. Further, the atoms collide both with one another and with the walls of their container, and these collisions lead to an additional ‘smearing out’ of the frequencies emitted or absorbed by each atom.

These problems could be largely nullified if only the atoms in the gas could be slowed down, or even brought to a complete stop. The great discovery of the 1980s and early 1990s was that this can be achieved by using laser light of a carefully-selected frequency to manipulate the atoms in the gas. Like many of the best achievements in science, the basic idea is both simple and elegant: a rapidly-moving atom is gradually slowed down by bouncing a stream of photons off it, one after another. Although each photon takes only a small amount of momentum away from the atom, the absorption and re-emission of a photon takes place in less than a microsecond, so that a single atom can scatter over a million photons every second. Consequently, an atom can be slowed down from a speed of several hundred metres per second (corresponding to room temperature) to a near-complete standstill in only a few thousandths of a second. These laser-cooled atoms -- which are at a far lower temperature than anything found in nature, even in the deepest depths of outer space -- can now be measured, probed, and further manipulated with an extremely high degree of accuracy.
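The momentum bookkeeping in the paragraph above is easy to check with a quick back-of-the-envelope calculation. The numbers below - rubidium-87 cooled on its 780 nm line, a scattering rate of ten million photons per second - are illustrative assumptions of mine rather than figures from the post, but they are typical of real experiments.

```python
h = 6.626e-34        # Planck's constant, J s
m_atom = 1.44e-25    # mass of a rubidium-87 atom, kg (assumed example species)
wavelength = 780e-9  # wavelength of the cooling laser, m (assumed)

# Each absorbed photon changes the atom's speed by one recoil velocity.
photon_momentum = h / wavelength          # p = h / lambda
recoil_velocity = photon_momentum / m_atom  # roughly 6 mm/s per photon

v_initial = 300.0  # typical room-temperature atomic speed, m/s
n_photons = v_initial / recoil_velocity   # recoils needed to stop the atom

scatter_rate = 1e7  # scattering events per second (upper figure from the text)
stopping_time_ms = n_photons / scatter_rate * 1e3

print(f"recoil velocity: {recoil_velocity * 1e3:.1f} mm/s per photon")
print(f"photons needed:  {n_photons:.0f}")
print(f"stopping time:   {stopping_time_ms:.1f} ms")
```

Tens of thousands of recoils, at millions of scattering events per second, comes out at a few milliseconds - consistent with the "few thousandths of a second" quoted above.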

The process of laser cooling: by arranging a set of lasers such that they remove momentum from the rapidly-moving atoms in a room-temperature gas, clouds of up to a few tens of millions of atoms can be cooled down to temperatures of less than a millionth of a degree above absolute zero.

Together with the invention of related techniques for trapping clouds of laser-cooled atoms using combinations of laser light and magnetic fields, the development of laser cooling stimulated a frenzy of new activity in atomic, molecular, and optical physics. Since the early 1990s, the level of experimental control in cold-atom experiments has progressed to the point where it is now routine to isolate, trap, and cool either individual atoms, or clouds of up to a few tens of millions of atoms, to temperatures of a few hundreds of nanoKelvin (billionths of a degree above absolute zero) in a controlled and repeatable fashion. These new experimental capabilities have found applications in a diverse range of topics, which span all facets of atomic, molecular, and optical physics. Two such topics in which laser cooling plays an integral role – namely, the interaction of laser-cooled atoms with light trapped between two very small mirrors, and the interaction of light with laser-cooled ions trapped by rapidly-oscillating electric fields – are those in which the Nobel-winning research of Serge Haroche and David Wineland lies.

Monday, October 22, 2012

Have you ever stopped to consider what makes you a single organism? It might sound strange, but it is a question dripping in biological significance. You may think of yourself as a neatly packaged single unit, yet you are probably also aware that this one unit is made up of tiny individual cells working together. But how do 75,000,000,000,000-odd cells cooperate with such precision? Why your cells work together is an entirely separate question, with answers to do with filling evolutionary niches and delegating functional roles; I'm talking about how they do it. That's the million dollar question!

Well, to be precise, it's the $1.2 million (or 8 million Swedish krona) question, as it was announced last week that this year's Nobel Prize in Chemistry (and the accompanying monetary reward) has been awarded to Robert Lefkowitz and Brian Kobilka for their outstanding work on the biology of G protein-coupled receptors (GPCRs). In this post I hope to give you an understanding of how GPCRs work, why they're important enough to deserve a Nobel Prize, and how they relate to the question of how you stay as just one you.

GPCRs - the eyes and ears of the cell

If you were a cell, how would you know what to do? When should you divide, where should you move, what should you make? You couldn't just do it randomly or to some pre-determined schedule, because the human you're in is unpredictable and its cells must be flexible in their behaviour to match that. So, what you really need to make these decisions is information. This is exactly how we humans make decisions about our behaviour: we gather information about the surrounding environment through our senses and then act appropriately. A cell that receives no information from outside its own membrane is as impotent as a human with no sense of sight, smell, touch, taste or hearing. OK, so cells need information; how do they get it? As you've probably guessed given my snappy subtitle and general build-up, the answer is receptors!

One out of every 8,000 humans is born with some of their internal organs on the wrong side of their body, a condition which can have serious medical consequences. Although we're usually described as symmetric, that's only superficially true. Like other vertebrates, we look symmetric from the outside but our internal organs show left-right asymmetry; unless you happen to be a Time Lord, you have only a single heart which is normally located on the left side of your chest. Changes to the organization of the internal organs can lead to cardiac defects, misalignment of the bowel and other serious problems. Many genes are known to play a role in establishing this asymmetry, but we still don't fully understand its evolutionary and developmental origins. Earlier this year, a paper published in the journal PNAS described how this asymmetry is established by subcellular components early in embryonic development.

Experiments with plants have already shown that subcellular structures can have an effect on macroscopic organs. Cells are highly organized, dynamic, complex living things, more kin to a vast city than to a sack of fluid. The cytoskeleton is an important part of this structure and plays a critical role in many processes, including determining the shape of the cell and acting as a transportation network, much like a road and rail network in a city. The cytoskeleton is a network of different kinds of filaments and microtubules, the roads and rails themselves, which in turn are built out of the proteins actin and tubulin. A decade ago, scientists discovered that a mutation in one of these building blocks, tubulin, could have far-reaching effects in plants. The mutated tubulin changes the shape of the cytoskeleton, twisting it; this changes the shape of the cells, which leads to flowers and other organs being twisted in turn.

Tubulins are a basic component of the machinery of life, found in every kind of cell. Based on the belief that left-right asymmetry is a consequence of subcellular structures, a team of scientists led by Michael Levin at Tufts University in Massachusetts decided to investigate the role of tubulin in establishing this asymmetry. They injected embryos of the frog Xenopus laevis with mutated forms of two tubulin related genes, Tubgcp2 and Tuba4, and followed the developing embryos to find out how frequently the internal organs were located on the wrong side of the body, a condition known as heterotaxia. About one quarter of the injected embryos were heterotactic, with half of those showing abnormalities in at least two organs. Amazingly, this was only true if the embryos were injected when they were still only a single cell; embryos that had already divided into two or four cells weren't affected by the mutated tubulin. Whatever the mechanism involved may be, tubulin is clearly critical to a very early decision in the embryo which has long term effects on the positioning of internal organs.

The researchers weren't content to simply show that tubulin has a role in establishing internal asymmetry; they also wanted to explore how it might be accomplishing this. One possibility is that changes to tubulin alter the structure of the microtubules which affects transport within the cell. The cytoskeleton is known to be biased towards the right half of the frog embryo, leading certain molecular motors and their cargo to be preferentially transported to that side. This rightwards bias disappeared in the mutant embryos, supporting the idea that the mutated tubulin somehow disrupts the regular pattern of transport. The researchers also used the mutant embryos as a tool to fish out a whole suite of maternal factors that depend on tubulin in order to be localized to one side of the embryo, including cytoskeletal and transport-related proteins which can form the basis of future research into how this asymmetry is maintained and propagated.

Finally, the team co-operated with scientists at the University of Illinois and Cincinnati Children's Hospital Research Foundation to verify that the same process takes place in other organisms. They found that introducing mutations in tubulin led to changes in left-right asymmetry in both human cell cultures and the nematode Caenorhabditis elegans. Since tubulin seems to play a similar role in establishing asymmetry in frogs, nematodes and humans, the authors are confident in asserting that this is an ancient and conserved mechanism of left-right patterning.

In addition to its implications for an important class of human birth defects, this is a thrilling developmental story. It's quite amazing to see how changes in subcellular components can propagate up through cells, tissues and organs to have an effect on the overall layout of the organism itself. While the authors describe this as a mechanism which has been conserved during evolution, I think it may also be a common physical principle which different groups have taken advantage of. Whatever its evolutionary origins, it's a mechanism which I find profoundly beautiful. The idea that the orientation of the cytoskeleton is amplified by changes in subcellular transport to have major developmental and physiological consequences is so elegant that I can't help but revel in it. This is the kind of story that makes me fall in love with science all over again. The world may be fabulously rich and complex, but sometimes the explanation can be sublime in its simplicity.

Monday, October 8, 2012

The picture is from Facebook: "The wonderful team at the Malaghan took some time out today to show Ruba and Rose (who generously donated their pocket money to the institute) around. Here Ruba looks at white blood cells through a very impressive microscope as sister Rose looks on."

My current role at a university art gallery should imply some kind of practical art and science crossover. After all, the scientists at the Malaghan - literally down the road from the gallery - are researchers with as much stake in the cultural life of the campus as the musicians, artists, film historians, and poets I talk to daily. Theoretically we understand this at the gallery. But we have to take a different approach to collaboration if we really want a more dynamic back and forth between the research we frame in the white cube and the research that is framed in the lab.

What is that approach? Did you visit the art gallery associated with the university you trained at, or work at now? Why not? What would it take to create that relationship?

How do muscles work?

Physics students learn the definition of work and mechanical energy in their first course on classical mechanics. Mechanical work is performed when a test body is moved against a force, and the work performed equals the scalar (dot) product of the force and the displacement if the force stays constant; otherwise you have to evaluate an integral. If the test particle is stationary, no work is performed. But what happens when you hold a heavy object with your arm? Even if you don't move the arm, and so perform no work from a physical point of view, it proves exhausting, and after a while the arm starts aching. You certainly feel as though you have performed work, although this contradicts the physical definition of work.
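To make the contrast concrete, here is a minimal sketch of the physical definition for the constant-force case, \(W = \vec{F} \cdot \vec{d}\), using illustrative numbers of my own (a 10 kg mass held or lifted against gravity):

```python
def work(force, displacement):
    """Work done by a constant force, in joules.
    Both arguments are (x, y, z) component tuples; the result is
    their dot product, W = F . d."""
    return sum(f * d for f, d in zip(force, displacement))

g = 9.81  # gravitational acceleration, m/s^2

# Lifting a 10 kg object by 1 m against gravity:
print(work((0, 0, 10 * g), (0, 0, 1.0)))   # ~98.1 J

# Holding the same object still: the displacement is zero, so the
# physical work is zero, even though your arm gets tired.
print(work((0, 0, 10 * g), (0, 0, 0.0)))
```

The second call returning zero is exactly the paradox of the aching arm: by the textbook definition, holding something still does no work at all.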

Mechanical vs. molecular engines

What's wrong here? Are muscles different compared to engines? Clearly, there's a contradiction. It turns out that this example can be explained via the mechanism of molecular engines, which work very differently compared to the mechanical engines we're familiar with. And one needs to understand a bit of non-equilibrium thermodynamics!

Actin and myosin proteins

Muscles consist of two proteins called actin and myosin. Actin is in fact a very old "invention" of Nature: it is almost identical in yeast and in humans, and serves the purposes of cytokinesis (i.e. the separation of cells) as well as locomotion. It consists of amino acids and has a helical shape. Myosin is a protein that is able to change its shape under the influence of adenosine triphosphate (ATP). It resembles a Q-tip, with a head that can carry out a nodding movement. The energy for changes to myosin's shape is provided by consuming an ATP molecule and dissociating it into adenosine diphosphate (ADP). Fresh ATP is generated in mitochondria, which are small cellular organelles, by the oxidation of sugars.

What follows are some reflections on the scientific bits and pieces people presented at the conference that I happened to find interesting. It might be a bit technical, but please ask questions if I use jargon you don't understand. Also, if you're an expert and I write something you want to comment on, please do (especially if something I write is misleading or just plain wrong).

The topics I've chosen below just happen to be what I found memorable. I made no attempt to choose these topics by any sort of theme. I apologise if I've missed anything particularly interesting. Perhaps if you were there and think I missed out something interesting you can either mention it in the comments or write a guest post for us.

Neutrinos and precision cosmology

One of the first images captured by the Dark Energy Survey. The more interesting images it will take will be of very distant galaxies and won't look anywhere near as nice. This one is just for people to put in their blogs.

Jan Hamann gave a talk on the future constraints that cosmology will provide for neutrino physics. I was pleasantly surprised by the power of large scale structure probes, such as Euclid.

We know from particle physics experiments that the squared masses of two of the neutrinos differ by about \(2.4\times10^{-3}\) square electron volts. This means that the heaviest neutrino must itself be heavier than about 0.05 electron volts.
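As a rough sanity check on that kind of number (my own aside; the atmospheric mass-squared splitting \(\Delta m^2 \approx 2.4\times10^{-3}\,\mathrm{eV}^2\) is an assumed input here, not a figure from the talk): since the heaviest neutrino satisfies \(m^2 \geq \Delta m^2\), the lower bound on its mass is just a square root.

```python
import math

# Assumed atmospheric mass-squared splitting in eV^2 (not from the talk).
dm2_atm = 2.4e-3

# The heaviest neutrino obeys m^2 >= dm2_atm, so its mass is bounded below by:
m_heaviest_min = math.sqrt(dm2_atm)
print(f"heaviest neutrino mass >= {m_heaviest_min:.3f} eV")  # -> 0.049 eV
```

That lands in the same ballpark as the figure quoted above, which is why cosmological probes sensitive to sub-0.1 eV mass sums are so interesting for neutrino physics.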

Monday, September 17, 2012

Some of you may already be familiar with the 'Symphony of Science' YouTube series, but I thought I'd give it a plug here anyway. These are a collection of videos created from various documentaries and science programmes spliced together and auto-tuned to make a song on a specific area of science. I'll be honest, the 'song' aspect of it is not the greatest musical contribution that mankind's ever seen, but I still really like these videos because they encapsulate an entire idea in a way that is so commonly used to express other concepts but is so rarely applied to science. The topics covered range from evolution, to space travel, to quantum mechanics, and are all well worth a watch.

In recent years I've sensed a slowly growing passion among the general population (in the UK, at least) for good scientific broadcasting and innovative ways to present scientific principles. This is exemplified by the prominence of a number of 'popular' scientists, such as Brian Cox or David Attenborough, and the fact that several celebrities who started out in general entertainment have now moved in the direction of scientific broadcasting, Dara O Briain and James May to name but two. Perhaps we here on this sceptred isle have been spoiled by the exemplary science programmes regularly released by the BBC, but I believe that innovative presentations of science have the capability to connect with people regardless of nationality. The Symphony of Science is just one such presentation and, even if it's not your cup of tea, it is undoubtedly innovative.

The latest instalment in the series was released this week and covers the ever-topical issue of climate change.

Monday, September 10, 2012

It's been an exciting week for molecular biologists, and should have been for everyone else too! This week, the Encyclopaedia of DNA Elements programme revealed its first results about the role of the 99% of the human genome that has, until now, represented a fairly sizeable gap in our understanding of how DNA works. This has made big waves in biological circles and has to some extent penetrated the mainstream media, on the BBC for example, but I thought I'd herald this great work by giving you a brief explanation of what DNA is, how it works, and why so much of it was a bit of a mystery until now.

From humble beginnings

DNA is unbelievably complex yet unbelievably simple at the same time. The principle upon which it is based is extremely simple: a string of code made up of four chemical units (called nucleotides: G, C, A and T) on two intertwined strands, where units on opposite strands are paired either G:C or A:T. The complexity that arises from such a basic principle emerges much in the same way that vastly complicated computer programs emerge from the 1s and 0s of binary computer code: fundamentally, it is a code for storing information, and the information it stores, when read correctly, is vast. And when you have around 3 billion units of this code in every cell in your body, that vastness can quickly become unfathomable!
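Because the pairing rule is so simple, one strand completely determines the other. A toy sketch (my own illustration, not anything from the ENCODE work):

```python
# The G:C and A:T pairing rule as a lookup table.
PAIR = {"G": "C", "C": "G", "A": "T", "T": "A"}

def complement(strand: str) -> str:
    """Return the base-paired opposite strand (read in the same direction)."""
    return "".join(PAIR[base] for base in strand)

print(complement("GATTACA"))  # -> CTAATGT
```

Applying the rule twice gets you back where you started, which is the property the cell exploits when it copies DNA: each strand serves as a template for rebuilding the other.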

Nonetheless, we've come a hell of a long way in the last 60 years. It was only in 1952 that the Hershey-Chase experiment conclusively demonstrated that DNA, and not protein (as had also been suggested), was the information carrier of the cell. Just a year later Watson, Crick, and Franklin discovered the now famous 'double-helix' structure of DNA, and the race was well and truly under way to decipher this mysterious molecule.

Monday, September 3, 2012

[Note from Shaun: When Mikko wrote us a guest post about the Higgs discovery he also gave me a short note he had written on the day CMS first opened their 2012 box and looked at the Higgs-relevant data. I decided to save that note for a rainy day. Today is that rainy day (literally, in Helsinki). What follows is more or less exactly what Mikko wrote down the evening that he and about 100 other people first learned that they really had discovered an entirely new (and probably fundamental) particle. The rest of us couldn't be there in that room, but we can read about it now!]

**** Do not open before July 9 *****

Recollections of a Higgs discovery

It's not official yet, and will not be for another three weeks, but you could say the Higgs was finally discovered today, on Friday the 15th of June, 2012. More than fifty years of searching, and there it is, at 125 GeV, just as the first hints last December indicated.

The big occasion was the unblinding of the 2012 data set at a Higgs meeting held at CERN at 15:00 on Friday afternoon. The meeting venue, the nondescript Building 222, better known as the Filtration Plant, was packed with CMS physicists, with half of the crowd sitting on the floor or leaning against the back wall. The air was dense with expectation, and immensely hot from the mass of people and failing ventilation.

Everybody was duly informed of the formal proceedings of the day: the slides would not be posted on the web, no recordings of the video meeting would be allowed (except an official one by the CMS Outreach Team), and nothing was to be leaked outside the collaboration after the meeting. Only the highest level of CMS management had seen all the results before, at a special preview held at 11 am that morning.

For the uninitiated, I should probably explain what the unblinding is all about. Scientists are intimately aware of unconscious biases creeping into an analysis when the stakes are high and the statistics are low. It is therefore considered good practice not to look into the signal region before fixing the analysis procedure and cuts. The expected background in the signal region is estimated using sidebands, and the analysis only proceeds to look in the signal region, the "box", when those sidebands are found to be sufficiently well understood.
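As a toy illustration of the sideband idea (entirely my own sketch, nothing resembling the actual CMS analysis code): fit the background shape to the regions either side of the blinded window, then compare the interpolated expectation with what the box contains once it is opened.

```python
# Toy sideband background estimate: assume the background is roughly linear
# in mass, fit it to the two sidebands, and sum the fitted yield over the
# (still-blinded) signal window.
import numpy as np

rng = np.random.default_rng(0)
# Simulated mass spectrum: a linearly falling background, no signal injected.
masses = rng.uniform(100, 150, 10000)
masses = masses[rng.random(masses.size) < (160 - masses) / 60]

signal_lo, signal_hi = 120, 130                      # the blinded "box"
side = (masses < signal_lo) | (masses > signal_hi)   # sideband events only

# Histogram the sidebands and fit a first-order polynomial to the bin counts.
counts, edges = np.histogram(masses[side], bins=40, range=(100, 150))
centers = 0.5 * (edges[:-1] + edges[1:])
keep = (centers < signal_lo) | (centers > signal_hi)
coeffs = np.polyfit(centers[keep], counts[keep], 1)

# Expected background in the box = sum of the fitted yields of its bins.
in_window = (centers >= signal_lo) & (centers <= signal_hi)
expected = np.polyval(coeffs, centers[in_window]).sum()
observed = ((masses >= signal_lo) & (masses <= signal_hi)).sum()
print(f"expected background: {expected:.0f}, observed: {observed}")
```

With no signal present, "opening the box" shows an observed count consistent with the sideband extrapolation; a genuine signal would show up as an excess over `expected`.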

The Higgs group had agreed that nobody would look into the signal region of the 2012 data before today (or yesterday evening really, to allow analyzers some time to prepare their talks). The previous week was spent by review committees scrutinizing each of the analyses, making sure all the systematics had been thought of and no obvious mistakes remained. Only the analyses given the official green light would be allowed to open the box, and the whole collaboration was invited to join the event.

A significant fraction of the three thousand collaborators apparently did indeed join, most of them remotely. From the first few minutes it was clear that the video meeting system was creaking and barely holding up under the traffic. The outside world could hear the audio, and we could hear some of them (despite frequent reminders to mute), but the video feed was apparently stalling. With no slides posted, the people in videoland were more or less blind.

All the more reason to feel privileged to be at CERN to listen to the talks in person.

The first three talks were strategically ordered to go from the channel with the worst mass resolution and lowest expected sensitivity to the one with best resolution and expected sensitivity. The HWW (Higgs decaying into two W bosons) analysis got the honor to be the first messenger.

After a bit of a jumpy start with switching lights on and off for better contrast on the video projector, trying to transmit slides outside CERN and accidentally dropping the network connection, the talk finally got up to speed. Several slides showing impressive agreement between data and simulation covered the sidebands before moving on to signal regions, with quite visible excesses. The bottom line: a little more than a three sigma excess with combined 2011(5/fb)+2012(3/fb) data, precisely in agreement with the standard model expectation for a 125 GeV Higgs. Hey, this starts to look quite promising!

After a few more minutes of more or less technical questions from the collaborators, we turned to the Hgammagamma (Higgs decaying into two photons) channel. The talk was given by a young Chinese graduate student from MIT, who'd obviously absorbed the American style of putting a bit of drama into a talk. With skill she had the collaboration holding their breath waiting to see the new limit plots... with a gigantic peak and a local excess of more than 4 sigma at 125 GeV when combined with 2011 data.

At that point I had to fight back tears. Those two channels alone meant that we'd have to be above the 5 sigma discovery limit already. It would mean we had discovered the Higgs. After 50 years of searching. Us, here.
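An aside on the arithmetic behind that claim (a back-of-the-envelope rule of my own, not the official combination procedure): significances of independent Gaussian measurements combine roughly in quadrature, so a ~3 sigma excess and a ~4 sigma excess together already clear 5 sigma.

```python
import math

# Approximate local significances quoted for the two channels above.
sigma_hww, sigma_hgg = 3.0, 4.0

# Independent Gaussian significances combine (roughly) in quadrature.
combined = math.hypot(sigma_hww, sigma_hgg)
print(f"combined significance ~ {combined:.1f} sigma")  # -> 5.0 sigma
```

The real combination accounts for correlated systematics and the look-elsewhere effect, so this is only a heuristic, but it shows why the room could already smell a discovery.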

Ok, back to sobering up a bit. The signal was much stronger than expected from the standard model, which meant we had either got very lucky, or that this could be a non-standard-model Higgs. All the better: we might have more to discover later in the year. The measurements from different subcategories of photon pairs and from 2011 and 2012 all looked perfectly consistent, so there was no hint of a measurement error.

The last of the big three talks was ZZ4l (Higgs decaying into two Z bosons, which in turn decay into four leptons). This is the ultimate channel with very little background so you could even claim with good probability that some individual events are from a Higgs boson decay, unlike in the background dominated HWW and Hgammagamma channels. The expectations were already high from the two previous talks, and the results certainly did not disappoint. Around half a dozen nicely clustered events right at 125 GeV, just like the standard model predicted.

It's interesting to note that improvements to the analysis, like Particle Flow based lepton isolation and recovery of photons radiated off the Z bosons, had both improved sensitivity and caused the secondary peak seen at 119 GeV in 2011 to disappear. The updated results combined with 2012 statistics made a very convincing case, racking up another 3 sigma or so.

The main trio was followed by a fourth talk on VH (Higgs produced in association with a vector boson, i.e. Z or W), which however had not yet been granted green light to open the box. Nevertheless, the analysts had made nice improvements to the analysis, gaining 50% more sensitivity out of the 2011 data, and showing a small excess consistent with standard model Higgs. A planned fifth talk on Higgs decaying into two tau leptons was postponed pending more checks, as was appropriate. The background checks before opening the box were clearly taken seriously.

Overall it was quite a tour-de-force, with all channels lining up in unison. This is still not all, because the analyses used only the first 3.9/fb of 2012 data collected until June 8, and in most cases even less. With 5.6/fb already in the can today and three more days to go to reach above 6/fb, the analyses will likely have about 50% more integrated luminosity for ICHEP. This might be enough to take some channels already above 5 sigma by themselves.

In the short summary the Higgs conveners reminded everybody that this is really a result by everybody in the collaboration, not just the Higgs group: thousands of people had contributed in building, maintaining and running the detectors, writing reconstruction software, calibrating the detectors, checking the data etc. The final analysis was only the tip of a large iceberg. And the work was not yet over, there was still plenty to do before presenting the results in Melbourne, Australia on July 9.

A final warning was given before people departed the room: smiles should be subdued and no champagne bottles should be popped in the cafeteria; there were filming crews outside that had not been allowed in the meeting room, and they had vowed to film the expressions on the people as they came out. We should not let the world know just yet ;)

Monday, August 27, 2012

Three six-week slots ago I was writing from Berlin in the midst of a long-ish research trip. I feel like I've been moving constantly ever since. When I left for Europe I'd just accepted a new job in another corner of the world, and that month on the road was partly spent getting used to the idea of a big transition.

Returning to Chicago in early May, I had 0% of the move organized. I hadn't even told a lot of people about it! By the middle of July I was entirely packed down (well, kinda...), and my deconstructed apartment was on a truck somewhere in the Midwest, hurtling its way to the Pacific, where it would be loaded onto a boat docked off the coast of L.A. I was saying a lot of farewells.

Right now, I'm juggling a new role and the inevitable, mundane logistics of repatriation with the process of finishing my dissertation write-up. It's quite a balancing act, but it's also the kind of spatial and life transition that — varying the specifics — young researchers have to do almost routinely. Shaun (who very kindly helped me out six weeks ago, mid-move) has had his own intense post-PhD experience, and James will be finishing soon.

Realistically, I won't be able to be a ubiquitous presence on the blog in the months to come. But I'm fundamentally committed to the idea it represents, so rather than bowing out for a period I'll attempt to maintain a lighter presence here by posting snippets that might illuminate the transition from graduate school to professional life — which is a fundamental, even foundational, aspect of academic life. Over the last months lots of people have empathized by telling me about their own post-PhD transition, whether it was undertaken 3 years or 3 decades ago. So I thought I'd use my own experience as a point of departure for a conversation and some fact-sharing about the joys and sorrows of academic and extra-academic nomadism.

In actual fact, I suspect that the juggling act of the coming months will be good for my work and writing, in calling for a whole new level of discipline and, with it, intellectual and argumentative succinctness. But we'll see. Use the comments to ask me questions and I'll attempt to respond. And please, share the details of your own crazy move.

Monday, August 20, 2012

The adventure of NASA's Mars Curiosity Rover took an exciting step forward today as the pioneering little machine vaporised its first rock with its cheerfully named 'ChemCam' laser. The Curiosity mission has tapped into a huge vein of public enthusiasm for investigation and the exploration of the unknown, exemplified by the fact that over a thousand people gathered in New York's Times Square to watch the live landing. Space exploration has often occupied a romantic place in the heart of public opinion, in part because of the wonderful images that can be beamed back, which can offer a more personal connection to the work being done and give people a greater sense of ownership over scientific endeavour. The Curiosity mission is no exception - NASA is dutifully publishing the images being sent back from Mars to the delight of those of us on Earth.

The potential for images to capture the public imagination has not gone unnoticed by other branches of science. Last year I published a post about the Cell Picture Show, a project by the biology journal Cell to highlight the most striking images emerging from the ever expanding field of biological microscopy. In this post I wanted to highlight a recent addition to the project: the super-resolution gallery.

Super-resolution is a fairly recent step forward in microscopy that allows biologists to observe life's molecular events on an unprecedentedly small scale using a range of cunning technical tricks. Researchers can now follow individual molecules as they move across the surface of a cell, or observe the machinery of processes like DNA replication in fine detail. Just like Curiosity, the missions of super-resolution are pushing back the frontiers of knowledge and exploring the unknown; not going further, but looking smaller. I would definitely recommend giving this new gallery a look: here.

Image is property of Cell and the National Institutes of Health - microtubules imaged by conventional (right) and super-resolution (left) microscopy within a Drosophila cell.

Monday, August 13, 2012

Let me be honest, as I write this I am heavily distracted by highlights (and now the closing ceremony) of the Olympics and as such this post won't be quite as substantial as I'd originally intended. If you're not a fan of the Olympics, then you've either never tried to be a fan of the Olympics, or you have no empathy. The point of the Olympics is not that being faster, higher or stronger actually matters, because it doesn't. Of course it doesn't. Nothing changes because of who won what in which event. The point of the Olympics is that despite the overwhelming and terrifying arbitrariness of human existence a group of people have decided to passionately care about something.

The worst sin in life is not to live. Nobody can claim that any Olympian is not living. The passion (and the story) of each and every competitor is why the Olympics is so enjoyable to watch. And, given the arbitrariness of life, why not passionately strive to be faster, higher or stronger?

But, if faster, higher or stronger isn't for you, pick your own mountains to climb and climb them instead. Then, while you climb, share your journey with the world. And enjoy it when, as has been the case for the last two weeks, the rest of the world shares their journeys with you.

Just make sure you're climbing something!

The Spirit of Exploration

The mountains those of us at this blog are climbing (most of the time) are the Twin Peaks of discovery and understanding. Rather than jumping a little higher or running a little faster we hope to see a little further. Our goal is to explore the unknown wildernesses of existence.

Monday, August 6, 2012

[Note from Shaun: The following is a guest post from Claudia Mignone. As you will learn from the post, Claudia is a scientist turned science writer and she shares below her thoughts on the divide between scientists and science journalism. Her past lives on both sides of this divide allow her to also see both perspectives of a world that can sometimes descend into acrimony. Enjoy... (all credit/blame for the image captions is my own to bear)]

Heidelberg! A university here sometimes awards PhDs to starving cosmologists. (Photo by Claudia)

I am an astronomer/cosmologist by training, and have been happily working as a science writer for almost three years now. In this post, I will explore the borderline that divides the people in the “trenches”, who are actively conducting research and producing scientific knowledge and results (what we like to call the “scientific community”, whatever the term really means), and everyone else who has an interest in the outcome of such research (let's call them “the public”). The borderline is quite an interesting grey area. Its width may vary significantly and continuously on the basis of a large number of factors and it remains partly unexplored by many. As someone who has spent some time working on both sides of this blurred region, I thought I'd share some thoughts that might be useful, particularly to the folks who are still locked in the “trenches”.

Before I start though, let me add a disclaimer of sorts: I don't mean to dispense any kind of “wisdom” here. All of my thoughts and observations are based on my own personal experience (plus that of many colleagues and fellow scientists I've encountered along the way) and are by no means of a general nature. It is entirely possible that others have gone down quite different paths and might disagree with the view that I developed along mine. I haven't conducted any study (neither thorough nor superficial) on any of the subjects I'll mention – although I'd love to do so in the future! – so I won't draw any conclusions, because I haven't reached any – yet. But maybe I'll try to propose some advice here and there. Feel free to take it. Or not.

Friday, August 3, 2012

Today is The Trenches of Discovery's first birthday. It's been a pretty cool year. It hasn't exceeded my wildest expectations, but it has exceeded my worst fears. We're chugging along nicely, with growth figures that, so far, match a pretty regular exponential growth. At the moment our viewing figures are still relatively humble, but if they keep following this exponential curve then, in about five years, the entire world will be checking out The Trenches of Discovery every day (I just made that up, I can't be bothered working it out in detail – but it's close enough). After that, I guess the governments of the world will have to organise successive baby-booms to keep our viewing stats rising.

Tell us about yourself

I thought, to mark the occasion, I would do two things. Firstly, I'll do something I've seen some other blogs do, which is to ask you, the reader, to tell us about yourself in the comments. What's your background? Where are you from? How did you find this place? What sort of posts of ours do you like? What is your favourite colour? What is your quest? What is the air-speed velocity of an unladen swallow? The usual. I'd love to know who the dudes out there are that are reading what we write. The comments are still completely unrestricted, so you can post as anonymously as you wish. Feel free to ask us questions in the comments too.

Trench Points

The second thing I want to do is have a small competition. I will give 20 Trench Points to the first commenter who correctly guesses what our most viewed post is. Everyone else who guesses correctly will get 5 Trench Points, and as a consolation I'll give 1 Trench Point to anyone whose guess is one of our five most viewed posts. Then, for a super-bonus, if people want to guess how many views the most viewed post has, I will give another 20 Trench Points to the closest guess. I'll announce the results this time next week.

What are Trench Points?

In case you're wondering, I don't know what Trench Points are either. But I'm sure that in five years' time, when we rule the world, they'll matter, so you'd better start collecting them now. More realistically, I guess, when we sell out and get merchandise they'll be exchangeable for free Trench Goodness. Clearly, this will only be true for people who don't post anonymously and use some sort of registered account. In the meantime, while we don't rule the internet, or have Goodness to give away, Trench Points will have to just be about inner feelings of awesomeness and achievement in life, as well as, of course, bragging rights with all your friends.

For anyone who does post with some sort of registered account (i.e. gmail address, Wordpress account, etc), I will keep a record of accumulated Trench Points over time and put them in a table somewhere.

Monday, July 30, 2012

You may have noticed, but we're holding a little sporting shindig here in London over the next two weeks that's got everyone rather excited. I myself am going to be spending a lot of time shuttling back and forth between the Olympic park and my house in Oxford and most of the rest of the time glued to my laptop watching as many sports as is humanly possible! Thanks to this busy sporting schedule, this week's post will be somewhat shorter than some of my others in the past, but I hope you still find it interesting. You may think, dear reader, that I will be shirking my scientific duties by devoting myself so fully to the Olympic smorgasbord but my enthusiasm is born out of pure biochemical curiosity and the sporting element is, I can assure you, wholly secondary!

How does biochemistry fit into the greatest show on Earth, you may ask? How does it not, I would respond! All of the athletes competing in this year's games have spent years training to improve their bodies' biochemical response to stress and physical exertion in order to fulfil the Olympic ideal of 'faster, higher, stronger'. In my last post of this series I described the molecular processes that allow muscle contraction, and in the preceding post I talked about how energy is processed within your cells to produce their 'energy currency': ATP. In this post I will bring these two topics together and discuss how energy is regulated in different muscle types, and how the biochemical situation varies hugely between the 100m and the marathon.

Monday, July 23, 2012

It was announced last week that an anonymous benefactor is to donate £20 million to be split between the University of Southampton and Cancer Research UK (CRUK), which will conduct its work in the new Francis Crick Institute, currently under construction in London. This money is going to be used primarily in the advancement of cancer immunotherapies, a branch of cancer treatments in which the patient's immune system is harnessed to fight cancerous cells. I wrote briefly about the use of the immune system in the fight against cancer in a post a few months ago, but since this field is now in the spotlight (in the UK, at least!) I thought I'd give you a short update on the kind of work that's being done and why £20 million is significant.

Targeting the traitors

Cancer, as you're probably aware, is the condition that arises when an individual cell or subpopulation of cells accumulates sufficient mutations in the genes that control cell division to become rogue entities within the body, replicating indiscriminately and dangerously. As a species we are highly susceptible to cancer because we live so damn long and generally don't die from the things that kill most other species (hunger, illness, predators etc.), and finding effective treatments remains one of the major goals of medical research. If a cancer forms a tumour and stays localised, treatment is straightforward: surgery to remove it, which is highly effective. The killer scenarios are either when tumourous cells metastasise and circulate in the body as individual cells, establishing too many new tumours to be removed; or when the original cell is one that does not form a tumour, as is the case for leukaemia or lymphoma. In these cases, chemotherapy or radiotherapy is commonly used to attack the cancerous cells, but often ineffectually and almost always with nasty side-effects. The idea behind cancer immunotherapy is to nudge the immune system into attacking these cancerous cells, thereby clearing the disease with minimal damage to other tissues.

These travels mean she hasn't had time to write something for this week, so I shall fill the gap. I didn't have anything sitting in the hard-drive waiting to be posted. What follows instead is some thoughts on an issue that anyone who knows me in the real world will recognise as something I regularly rant about. One day I will write a more carefully worded (and thought out) post on this issue, but, for now, pseudo-stream of consciousness is what you get.

Why don't more scientists enter politics?

That's not meant to be a rhetorical question. If anyone knows the answer(s), please tell me in the comments.

Scientists (especially physicists) are highly opinionated. We like to tell people our opinions (hence all the blogs). Even more specifically, many scientists are highly opinionated about politics, and we like to tell people our opinions on politics quite often. We lament particular policies the government of the day is implementing. We complain about the conditions set by government-funded research agencies, claiming that we know how to do it better. So why don't more of us enter politics?

In many other fields, politics is a well-known, accepted career path. Law, journalism, writing, business and school teaching, just off the top of my head, are all fields with a clear path to politics (or at least, ones from which many people choose to take that path). However, science is an equally important part of politics. Climate change, nuclear power, the technology industry, tertiary education: all of these things are either a subset of science or are at least heavily dependent on scientists to exist, and all of them are important aspects of the modern political world. Why aren't there scientists in the parliaments of the world helping to make the decisions that impact upon those spheres?

Why aren't more of the world's ministers of science scientists? One notable exception is the physicist and Nobel laureate Steven Chu, who happens to be Obama's Secretary of Energy. Looking back over past U.S. Secretaries of Energy, Chu and his predecessor Samuel Bodman are the exceptions to the rule. I appreciate that another requirement for a science minister is an understanding of law; however, an understanding of science is just as hard to pick up as an understanding of law. I'm no more comfortable with science ministers who are lawyers relying on advice about science than I would be with science ministers who are scientists relying on advice about law.

All of the above makes me wonder. And it isn't as if scientists are simply choosing to engage with politics in other ways (well, to a small degree they do engage in other ways, but not nearly as much as people in other fields). Very few (although not zero) join political parties, and those that do would very rarely try to take an active role in influencing a party's policy. But there is another point here that really confuses me. Most people with doctorates in a science subject don't end up as practising scientists. There just aren't that many jobs in science, especially in pure research, but even in science-based industry. Why doesn't some proportion of the people who start out in science, but for one reason or another don't stick with it, end up in politics? All those other fields manage it.

People I went through both undergraduate and postgraduate study with were well aware of finance and management consulting (etc.) as possible careers should the physics fall through or fail to inspire; why not politics?

If anyone has some thoughts and/or links on this, I'd love to read them. I will write a more thought out and detailed post on this before the end of the year, so any fodder you can give me now would be nice. In the meantime, if you're a scientist with political opinions of any persuasion, go join a party that matches that persuasion, attend their meetings and advocate policies that make scientific sense. And, convince your peers to do the same.