Category: Science

It’s the nightmare scenario: you look back at an old bit of code and realize you’ve made a mistake and, to make matters worse, the paper has already been published. This year I lived that nightmare. I had shared my code only to discover that a variable that should have been reverse scored (which boils down to multiplying the number by -1) wasn’t. It was a minor oversight I’d made as a first-year PhD student learning new statistics; I hadn’t caught the mistake until now, and, worse still, the code had been used in two papers I wrote simultaneously. I considered changing my name and hiding, but as I had a postdoc and my mother claims to like me, I figured it was better to keep my current identity.

‘…the right decisions don’t come without risk….’

Reaching out to the senior author, we knew there was only one solution: we had to redo the statistics and submit corrections. As an early career researcher, I was panicked. What if the results were drastically different? Was a retraction (possibly two) in my future? Fear aside, a mistake had been made, we had to own it, and if we were going to believe in scientific integrity then we had to show ours. It’s been my experience that the most difficult decisions, the ones that I’m truly afraid to make – those are the decisions I know to be right. But the right decisions don’t come without risk, and I can’t pretend that I wasn’t, and don’t continue to be, worried that not everyone would see this as a minor mistake. Science is competitive, and the feeling of having to be flawless, particularly at this phase of my career, is a weight. As a woman in science I already have to fight to be taken seriously, to be seen as competent, and I had committed a sin: I had made an honest mistake that had been published, twice. Before I could find out the effects of my mistake on my career, I had to find out its impact on my papers.

‘As a woman in science I already have to fight to be taken seriously, to be seen as competent…’

I somehow survived three painful hours while I waited to finish work at my postdoc and could get back to where I kept the study data. Upon sitting at my desk (liquid courage in hand) I redid the stats, anxious to see the results. Now look, I’m no slouch with numbers, I know what multiplying by -1 does to them, but panic overrode sense in that moment and I needed to see to believe. First paper: the error flipped the direction of effect on a non-significant variable, which remained non-significant. Okay, fairly minor; it just requires that the journal update the tables. Second paper: again, the only thing that changed was the direction of effect, though this variable had been and still was significant, meaning we had to adjust the numbers, a line in the abstract, and three sentences in the results. Not great, but as variables go it hadn’t even rated a mention in the discussion.
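A quick way to convince yourself why the damage was so contained: negating a variable flips the sign of its regression coefficient, but leaves the strength of the association, and hence its significance, unchanged. Here is a minimal sketch with made-up data (not the study’s actual code or variables):

```python
import numpy as np

# Toy data: a predictor x and an outcome y with a true slope of 0.5.
rng = np.random.default_rng(42)
n = 200
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)

def ols_slope(x, y):
    """Ordinary least-squares slope of y on x (with an intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

b_forgot = ols_slope(x, y)         # forgot to reverse-score x
b_correct = ols_slope(-1 * x, y)   # correctly reverse-scored x

# Only the sign differs; the magnitude (and any t- or p-value built
# from it) is identical, so significance is unaffected.
assert np.isclose(b_forgot, -b_correct)
print(b_forgot, b_correct)
```

Because the standard error of the slope is also unchanged under negation, the t-statistic only changes sign, which is why the corrections amounted to flipped directions of effect rather than changed conclusions.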

Okay, okay, okay (deep breaths, bit more whisky), this could be so much worse, I told myself. I screwed up, but hey, everyone makes mistakes. I was learning something new; I should have caught it earlier, but it was caught now. Onto the next step: making the corrections, contacting coauthors, and letting the journals know. Time to really live by our ideals. But first! Another moment of panic while I wondered if I had made the same mistake in my two newest papers. Opening code, reading through, and…no, I hadn’t made the mistake again. Somewhere along the way I had clearly learned how to do these statistics correctly; I just hadn’t caught the error in the original two papers, where I had copy-pasted the code between them. Good news: I am in fact capable of doing things correctly.

‘I had lived my nightmare and it felt, at least in this moment…completely survivable…’

Writing the email to my coauthors wasn’t something that I was particularly looking forward to. “Oh hey fellow researchers that I respect and admire, I screwed up and am going to let the journals and the world know. PS, please don’t think less of me and hate me. Okay, thanks.” While that’s not what I wrote, that’s what it felt like. An admission of imperfection, shame, guilt, a desire to live under a rock. However, I’ve been blessed with caring and understanding collaborators, each of whom was extremely supportive. Next, I sent an email to the journals explaining the mistake and requesting that corrections be published. Each journal was understanding and helped us write and publish corrections, and that was it; it was done. I had lived my nightmare and it felt, at least in this moment…completely survivable. I had imagined anxiety and panic and battling my own shame and guilt. This…this was a feeling of stillness that I was not prepared for.

Prior to contacting the journals and writing this blog, I asked myself how much this would hurt my career. Would a small mistake cost me my reputation, respect, and future in the science I’d already sacrificed so much for? Would writing this blog and openly speaking about the fact that I had made a mistake only further the potential damage to my career and respect? Would a single mistake, made at the beginning of my PhD and not since repeated, mean that others wouldn’t trust my science and statistics, wouldn’t want to work with me? Would I trust my own skills, and more importantly, myself, again? There was so much uncertainty and so little information available on this experience, yet mistakes like this must happen more often than we think; they just go unspoken.

‘…genuine mistakes? We have to make those acceptable to acknowledge, correct, even retract, and speak about, to learn and move on from.’

This, this is the crux of a problem in science: there are unknown consequences to acknowledging and speaking openly about our mistakes, and, by failing to do so, we only increase the chance that mistakes go uncorrected. Let’s hold those who commit deliberate scientific misconduct accountable, but genuine mistakes? We have to make those acceptable to acknowledge, correct, even retract, and speak about, so that we can learn and move on from them. Those who don’t learn from their mistakes? Well, they may be doomed to face the consequences. As a note, if we’re going to move towards openness and transparency in science then we need to be particularly careful that those in underrepresented groups aren’t unfairly punished or scrutinized for admitting and speaking about mistakes, as these groups are already under a microscope and face unique and frustrating challenges. We cannot allow openness and transparency to be used as one more excuse for someone to tell us no, not if science is to diversify and progress.

‘What kind of person and scientist do I want to be?’

Of all the questions I asked myself, deciding to write this post came down to one: What kind of person and scientist do I want to be? As an animal welfare scientist, I have long believed in being transparent and open in science; I realized that’s who I am as a person as well. Living by my ideals meant not only correcting my mistake but also talking openly and frankly about it. These choices, challenging as they may have been, are the right ones. To err is human, and luckily for me I have divine friends, mentors, and colleagues who forgive me my mistakes and sins. I believe that we should all be so lucky and that mistakes should be openly and transparently discussed. For now, I live to science another day and look forward to the challenges, mistakes (which I intend to catch prior to publication), and learning that come with it.

Many of us remember learning about DNA from either science class during our school days, or perhaps, our favourite detective series or film. But what is DNA? How did it get to be used in the criminal justice system in the first place? Most importantly, how is it being incorporated, used and understood by the criminal justice system? This piece provides a short introduction to this area of law in Ireland.

DNA stands for ‘Deoxyribonucleic Acid.’ A sample of DNA can be extracted from our saliva, blood and bone, for example. Each person’s DNA is structured differently, meaning that our DNA is effectively unique to each of us. DNA profiling was developed in 1985 by Sir Alec Jeffreys and his colleagues in Leicester, and allowed a ‘DNA profile’ to be generated from a physical DNA sample. A DNA profile looks similar to a barcode and is a digital representation of a DNA sample. Following this discovery, DNA became a prominent feature in the investigation of crime.

DNA evidence is important in the context of a crime because it can allow for the identification of a specific person at a crime scene and can help to identify unknown bodies. If DNA found at a scene is matched with a suspect, it places the suspect at the scene. DNA evidence has been praised because it is often seen as objective, scientific evidence, and has been considered preferable to other forms of evidence, such as witness statements, which are often subjective and unreliable. Despite these benefits, a problem arises if DNA is discovered at a crime scene but there are no suspects to test it against. This limits the ability of DNA to aid an investigation: although a profile was obtained from the scene, it cannot be compared with anyone. In light of this, the storing or banking of DNA profiles for comparison purposes became desirable for those investigating crimes. DNA storage allows a profile generated from a crime-scene sample to be tested against a range of profiles already collected from a pool of people. This is where the central appeal of DNA databasing originated.

Forensic DNA databases organise and store DNA information for the purposes of criminal investigations and to aid searches for missing or unidentified persons. They allow “rapid comparison” between profiles collected from crime scenes and profiles collected from people who are included in the database (Bieber, 2004: 29). Another frequently mooted (and often debated) benefit offered by DNA databases is their ability to deter people from committing crime, as criminals may have a heightened expectation of being caught. This claim has been disputed, however, both because of the difficulties in actually measuring deterrence and because criminals may merely adapt to the new circumstances by becoming more forensically aware. The storage of DNA information, even limited information such as a profile, has attracted much debate, particularly in relation to human rights. For example, while databasing is efficient in terms of managing information, a database can also be used “to track, group and classify people with or without their acquiescence” (Jasanoff, 2010: xx). People who have their DNA profiles stored on a forensic DNA database lose privacy, freedom and autonomy, and may be reluctant to engage in active citizenship (such as in protests) given the ability to identify them (Jasanoff, 2010: xxii).
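Conceptually, the “rapid comparison” a database enables is a locus-by-locus lookup of a crime-scene profile against every stored profile. The sketch below illustrates the idea only: the locus names and allele values are invented, and real systems use standardised STR loci and specialised matching software.

```python
# A profile is represented here as a mapping from locus to allele pair.
# All names and numbers below are hypothetical.
crime_scene = {"locus_A": (12, 14), "locus_B": (9, 9), "locus_C": (15, 17)}

database = {
    "profile_001": {"locus_A": (12, 14), "locus_B": (9, 9), "locus_C": (15, 17)},
    "profile_002": {"locus_A": (11, 13), "locus_B": (9, 10), "locus_C": (15, 16)},
}

def matches(query, candidate):
    """Two profiles match if every locus present in both has the same allele pair."""
    shared = set(query) & set(candidate)
    return bool(shared) and all(query[locus] == candidate[locus] for locus in shared)

hits = [name for name, profile in database.items() if matches(crime_scene, profile)]
print(hits)  # → ['profile_001']
```

The appeal of databasing follows directly from this structure: once profiles are stored, a new crime-scene profile can be checked against all of them at once, rather than against a single named suspect.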

The Irish DNA Database System

Ireland recently incorporated the DNA Database System into law, under the Criminal Justice (Forensic Evidence and DNA Database System) Act 2014. The 2014 Act is extensive, but the main purposes of the Act were neatly summarised by Colm O’Briain (who also provides a wonderfully succinct synopsis of the 2014 Act) (2014: 1-2). The main purposes include an overhaul of the previous legislation and common law practices in the area of taking DNA samples (from several different groups of people such as offenders, suspects and volunteers), the establishment of the DNA Database System, along with the provision of management and oversight for the System, and the implementation of the Prüm Council Decision, which provides for the international exchange of DNA evidence. Part 8 of the 2014 Act specifically addresses the DNA Database System, which is currently controlled by Forensic Science Ireland, an independent body based in Garda Headquarters in Phoenix Park.

Given the potential of DNA databases, one of the central debates which follows is who (or what offences) should qualify for entry onto the database. Typically, sex offenders are mooted as one of the key categories which should be included on a database. However, most databases extend beyond this to include people who have been convicted of other serious offences, such as murder. In some jurisdictions, inclusion criteria are based on the length of the sentence which the offence might warrant (premised on the logic that the more serious the offence, the lengthier the punishment). However, inclusion is not always restricted to people who have been convicted of an offence. A DNA database can also include ‘volunteers’, who are innocent people not convicted or suspected of committing an offence. This has led to discussion of the possibility of population-wide databases, although these are often dismissed as impracticable on both human rights and logistical grounds.

In the case of the Irish DNA Database System, there are four main ways that a person’s DNA profile can lawfully appear on it (O’Briain, 2014: 9). These are as follows:

If a person is detained for a relevant offence

A ‘relevant offence’ is an offence for which a person may be detained under Section 9 of the 2014 Act. Offences include those under the Offences Against the State Act 1939, along with drug-trafficking offences, murder, false imprisonment, and offences which may be punished by a term of five years imprisonment or more. O’Briain (2014: 8) neatly summarises that the minimum requirement is an offence with a maximum sentence of at least 5 years.

If a person is an offender or former offender

Offenders are identified as those who have been convicted of a relevant offence and are either (1) serving a sentence, on temporary release or subject to a suspended sentence, (2) convicted before or after the commencement of the Act and sentenced to imprisonment, (3) serving a term of imprisonment on foot of a transfer of prisoners provision (so long as the offence involved corresponds to a relevant offence) or (4) subject to the requirements of Part 2 of the Sex Offenders Act 2001 at the time of the commencement or at any time thereafter.

If a person volunteers to provide a sample and then allows the profile to be entered onto the System

The taking of DNA samples from volunteers is governed by Part 3 of the Act, with the entry of volunteer profiles onto the DNA Database System covered under Section 28.

If a DNA profile was generated under the previous statutory regime

Prior to the 2014 Act, the Criminal Justice (Forensic Evidence) Act 1990 governed the taking of DNA samples. This provision therefore accommodates the transition of samples collected under the previous legislation and allows such samples to be entered onto the System.

The next debate that follows relates to how long we need to retain this information. As a result, retention periods make up a large part of the discourse on the development of DNA databases around the world. One argument for retaining the information for longer periods of time is that it may mean that detection rates are improved. However, retention of such data has also been considered an invasion of privacy. For example, the UK’s DNA database was subject to “serious scrutiny” which culminated in the European Court of Human Rights (ECtHR) reprimanding the UK’s approach to retention of data in the case of S and Marper v United Kingdom (2008) (Kazemian et al. 2011: 49). England, Wales and Northern Ireland were the only countries in the Council of Europe which allowed for the indefinite retention of DNA data of people who were not convicted of a crime. The ECtHR held that this indefinite retention of data was a violation of Article 8 (the right to privacy) of the European Convention on Human Rights (see Prainsack, 2010: 15-16).

Under the 2014 Act, there are different retention regimes for DNA profiles and samples depending on the origin of the sample. For example, volunteers and those who work in the forensic science laboratory have different retention regimes. It is therefore beyond the scope of this piece to explain each of these different regimes. Instead, this piece specifically considers those who are arrested for a ‘relevant’ offence. In Ireland, the retention regime for this category of persons is quite interesting. Under Section 80 of the 2014 Act, if a person is detained for a relevant offence and their DNA profile is entered onto the System, it is only removed in the following situations:

If proceedings against a person are not instituted within 12 months of taking that sample (unless the reason for the delay is because the person has absconded or cannot be found).

In the case that the proceedings have been instituted, then removal will occur if the person is acquitted of the relevant offence, if the charge is dismissed, or the proceedings discontinued.

If the person’s conviction was identified as a miscarriage of justice.

If the person receives an order under the Probation of Offenders Act 1907 for the relevant offence and they have not been convicted of a relevant offence in the 3 years following that order.

This is subject to Section 81, which allows the Garda Commissioner to extend the retention period by 12 months at a time, up to a maximum of 6 years (that is, up to six successive twelve-month extensions). The person can, however, appeal this decision to the District Court. There is also a provision under Section 93 which allows the Garda Commissioner to apply to the District Court to extend the retention period where there is a “good reason” to do so (see O’Briain, 2014: 16). This indicates that removal is restricted to certain instances, and that retention of the information appears to be preferred by the legislation.

To conclude, DNA forms an important part of investigations into criminal activity and missing persons. DNA evidence can be highly useful, but the potential is limited if there is no source with which to compare it. To combat this limitation, DNA database systems have been established in jurisdictions around the world. Ireland has now joined this group by enacting the Criminal Justice (Forensic Evidence and DNA Database System) Act 2014 which governs this area of law.

References

Bieber F. R., (2004) ‘Science and Technology of Forensic DNA Profiling: Current Use and Future Directions’, in DNA and The Criminal Justice System: The Technology of Justice, edited by Lazer D., The MIT Press, Cambridge, pp 23-62.

It would be easy to imagine that the Dark Universe was a malevolent force in the latest Star Wars movie, its leaders the enemy of the Federation, or that dark energy had some kind of demonic origin. However sinister it may sound, the dark side is entirely innocent and, in fact, it comprises 95% of our Universe.

To put this in perspective, Earth is an almost infinitesimal speck in the cosmos. It orbits the Sun, one of billions of stars swirling around and bound together to form our galaxy, the Milky Way. Moreover, there are billions of galaxies in our Universe, each boasting its own hoard of stars and planets! Observational cosmology tells us that these structures, which are made of particles whose physics we understand, constitute only about 5% of everything in the Universe. The rest is dark matter and dark energy.

Dark matter is a special type of matter that neither emits nor interacts with light, but plays an important role in the story of our Universe. More than three quarters of the mass in our Milky Way galaxy (and other galaxies) is the invisible dark matter, rather than the stars and the planets. Therefore, the dark matter creates a large gravitational effect and acts as the glue holding our galaxies together.

Dark energy is even more mysterious. It is a form of energy that drives the accelerated expansion of our Universe. That is, our observations reveal that while stars stay tightly bound in galaxies, as cosmic time marches on the galaxies themselves are moving further away from each other, and our best theory holds dark energy responsible. While we can’t see these entities, we infer that they exist from their effect on things we can see.

It may sound like cosmologists have the Universe sussed, but there are cracks in our Standard Cosmological Model. While we understand the effect of dark matter in the universe, particle physicists are yet to detect its particle in their giant dark-matter-net experiments. Meanwhile, the best theoretical prediction for dark energy, from quantum physics, is starkly wrong. To put it politely, there is much work to be done! It is possible that we are missing something in our theory of gravity- Einstein’s General Relativity- and may need to invoke some new physics in order to solve the dark energy phenomenon. That is, just as Newtonian gravity, which satisfies experiments on Earth, was revolutionised by Einstein’s theory in order to explain measurements in the solar system, perhaps we need another upgrade to explain even larger-scale observations. We focus on observing how dark matter changes over cosmic time, which sheds light on how dark energy evolves and allows us to test gravity on cosmological scales.

Cosmology has a vast toolbox of independent methods to understand the nature of the Dark Universe and to test the laws of gravity. Techniques include measurements of the brightness of supernovae- the explosive ends of binary pairs of unequal mass stars; exquisite observations of the Cosmic Microwave Background- temperature fluctuations across the sky from the light emitted in the very early universe, just 380,000 years after the Big Bang; charting the distant Universe by obtaining precise velocities of and distances to galaxies; and meticulously measuring the shapes of distant galaxies. The latter is called weak gravitational lensing.

Weak gravitational lensing

As we observe a distant galaxy, we collect its light in our telescopes after it has journeyed across the Universe. According to General Relativity, dark matter, like any massive structure, warps the very fabric of the Universe, space-time, as depicted by the grid in the image below. The path that the light travels along, indicated by an arrow, also gets bent with the space-time and as such, the image of the galaxy that we capture appears distorted. The presence of dark matter or massive structures along the line of sight has the effect of lensing the galaxy- making it appear more elliptical in our images and inducing a coherent alignment among nearby galaxies.

A depiction of weak gravitational lensing. As light from distant galaxies travels towards us, it passes by massive structures of dark matter, shown here as grey spheres. Dark matter’s gravity curves the local space-time as well as the path that the light follows. This curvature distorts the images of the background galaxies that we then observe, with the amount of distortion depending on the distribution of dark matter along the light path. By measuring this distortion, we can infer the size and location of invisible massive structures (dotted circles). Image credit: APS/Alan Stonebraker; galaxy images from STScI/AURA, NASA, ESA, and the Hubble Heritage Team.

The stronger the average galaxy ellipticity is in a patch of sky, the more dark matter there is in that region of the Universe, assuming galaxies are, in reality, randomly oriented. The induced ellipticity of the galaxies is therefore a faint signature of dark matter inscribed across the Universe. If we can measure this alignment to extreme precision, and combine it with the equations of General Relativity, we can infer the location and properties of the matter- both visible and dark- between us and the galaxies. By mapping the evolution of dark-matter structures through cosmic history and documenting the accelerating expansion of space and time, we learn about dark energy.

I work as part of a European team, called the Kilo-Degree Survey, imaging a 5% chunk of the sky a few hundred times the size of the full moon. We have measured the positions and shapes of tens of millions of galaxies, as they were when the universe was (at most) half its current age. While this sounds wildly impressive, we are only now seeing the tip of the iceberg of what is required to truly understand our Universe. That is because while gravitational lensing is a powerful cosmological technique, it is extremely technologically challenging.

The typical distortion induced by dark matter as a galaxy’s light travels through the universe is only enough to alter the shape of that galaxy by less than 1%. As the lensing effect is weak, in order to detect it we need to analyse the images of millions of galaxies. This entails a data challenge, necessitating rapid processing of petabytes of data. A scientific hurdle arises because the weak lensing distortions are significantly smaller than the distortions that arise in the last moments of the light’s journey. Due to the effect of the Earth’s atmosphere and our imperfect telescopes and detectors, instead of measuring the shapes of galaxies in images that are beautifully resolved like the Hubble Space Telescope image below, in large lensing surveys galaxies can appear as fuzzy blobs that span only a few pixels. Just to up the ante, these terrestrial effects change between and throughout the night’s observations as the wind, temperature and weather vary, even in the exquisite conditions of the mountaintops of the Atacama Desert, Chile, where lensing data is often collected. In order to isolate the dark matter signature, the nuisance distortions are modelled to extremely high precision and then inverted, allowing an accurate recovery of the cosmological signal. Further complications arise from the physics of the galaxies themselves: they have an intrinsic ellipticity and dynamical processes that we do not perfectly understand, but must also factor into our calculations.
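The statistics behind needing millions of galaxies can be sketched in a few lines. In this toy model (in no way a real shape-measurement pipeline), each observed ellipticity is a large random intrinsic component plus a tiny coherent shear; averaging over N galaxies shrinks the random part as 1/sqrt(N), revealing the ~1% signal:

```python
import numpy as np

rng = np.random.default_rng(0)
true_shear = 0.01                 # a ~1% coherent lensing distortion
n_galaxies = 1_000_000

# Intrinsic ellipticities: large, random, and (we assume) zero on average.
intrinsic = rng.normal(0.0, 0.3, size=n_galaxies)
observed = intrinsic + true_shear

estimate = observed.mean()        # noise ~ 0.3 / sqrt(N), here ~0.0003
print(f"recovered shear = {estimate:.4f} (true value {true_shear})")
```

The 0.3 intrinsic scatter is a typical ballpark figure, not a measured value; the real analysis must also model and remove the atmospheric and instrumental distortions described above before any such average is meaningful.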

Hubble Space Telescope image of a cluster of galaxies called Abell 1689. The larger yellow galaxies are members of this massive galaxy cluster, bound within a dense clump of dark matter that gravitationally distorts the space and time around the cluster. The small blue objects are galaxies that are behind the cluster, whose light path has become bent as it journeys towards Earth, passing by the cluster. Gravitational lensing produces the giant curved blue arcs that you can see surrounding Abell 1689- the distorted images of the distant galaxies. The five blue dots with rainbow crosses are just stars in our own Milky Way Galaxy. Image credit: NASA/ESA/STScI.

The Kilo-Degree Survey, as well as similar American and Japanese experiments, acts as a stepping stone and training ground for an epic coming decade for observational cosmologists. We are at the dawn of several major international projects that will survey the sky to greater depths and resolution than ever before. The Large Synoptic Survey Telescope will image the entire Southern sky every few nights, building the deepest and largest map of our cosmos; the Euclid satellite will survey the sky from space, eliminating the worry of Earth’s atmosphere; and the Dark Energy Spectroscopic Instrument will deliver extremely precise locations and velocities of over 30 million galaxies. I look forward to helping these projects map the distant Universe, trace the evolution of dark matter and dark energy from 10 billion years ago to the present day and, in doing so, bring us closer to fathoming the other 95% of our Universe: the dark side.

It is a humbling field that asks what the Universe is made of and how its structure evolved to allow the formation of galaxies and our existence. In our insignificant snippet of the grand story of the Universe, it is remarkable that technology allows us to observe objects at distances beyond our comprehension and that our diverse range of measurements even vaguely fits a consistent model.

Building better bread: Using genetics to study senescence and nutrient content in wheat.

Wheat provides over 20% of the calories consumed worldwide, the second most of any crop after rice (1). Nearly all of us will eat wheat in one form or another every day—staple foods like bread and pasta as well as our favourite treats, from cake and biscuits to certain types of beer. For many cultures, wheat has been essential for thousands of years – it was originally domesticated around 10,000 years ago. The wheat we eat today is descended from 3 different kinds of wild grasses which crossed together at different times to produce the wild ancestor of wheat (Figure 1)(2). Some of us can take it for granted now that we’ll be able to pop down to the corner shop and pick up a loaf of bread at a moment’s notice, but it took thousands of years of selection by farmers to get to the wheat that we’d recognise today.

Figure 1: Wheat originated from two separate crosses between wild grasses. The first occurred around 400,000 years ago, producing wild emmer. Wild emmer then crossed with a different grass around 10,000 years ago. This final cross produced Triticum aestivum, which would be domesticated into bread wheat by humans. At each cross, the genomes of the wild grasses were combined, resulting in Triticum aestivum containing 3 separate genomes (shown as “AABBDD”, with each letter corresponding to one of the ancestral genomes). Figure courtesy of Dr. Cristobal Uauy.

This process of selection was accelerated in the mid-1900s, during the period called the “Green Revolution.” A combination of research into better breeding techniques and new chemical fertilizers, among other factors, contributed to the substantial increase in yield seen during this period. One critical change involved reducing the height of wheat plants which allowed more energy from photosynthesis to be moved into the grain rather than being stored in the leaves and stems of the plants. The yield increases that came about due to the Green Revolution were essential to keep up with the demands of the growing world population.

Most of the work during the Green Revolution was focused on increasing yield alone, boosting the calories that could be extracted from a single field of wheat. But the benefits of wheat extend far beyond calories. Perhaps surprisingly, wheat provides 25% of the global protein intake (1). Most of us would think of meat or beans as our main sources of protein, but as a staple crop wheat is essential for our protein intake. The nutrients present in the wheat grain, like iron and zinc, are also essential in our diet.

Campaigns to eradicate hunger have had unprecedented success in recent years, and over 89% of the world’s population are able to obtain enough calories for their basic needs (3). Yet increasingly it is the nutrient content of our diets that is driving growing health crises globally. At one extreme, malnutrition, defined as the lack of essential nutrients in a diet that has sufficient calories, is one of the leading causes of childhood stunting (3). At the other extreme, obesity in both childhood and adulthood is increasingly common, partly as a result of highly calorific food with poor nutritional value becoming so easily available.

Quality Control

During the development of wheat, the stage known as “senescence” is critical in regulating the amounts of proteins and nutrients in the developing grain. This is the period when wheat changes from its living, green state to the dead, yellowing state that is so familiar to us at the end of summer. As the leaves die, the molecules in the leaf start to break down and the elements that make up these molecules are transported from the leaves into the developing grain. At the same time, proteins and carbohydrates are also remobilised from the leaves and moved to the grain. It’s this movement of nutrients and protein that is essential in establishing the quality of the grain. Different levels of protein determine what the grain can be used for: bread making requires high-protein flour, as this protein forms gluten, which creates the structure of bread. At the bottom end of the scale, lower quality wheat can be used as feed for livestock and poultry. However, while increased quality is desired, historically a trade-off has been seen between wheat quality and yield (Figure 2).

Figure 2: Increasing quality and yield often leads to a trade-off. As senescence moves later, yield tends to increase, while quality (such as protein and nutrient levels) tends to decrease. The reverse is found with earlier senescence. This leads to a balancing act with the timing of senescence—how can you maximise both yield and quality?

My research is focused on understanding how the process of senescence is controlled in wheat in the hope that we can use this knowledge to increase the nutritional quality of wheat grains. I’m particularly interested in studying genes that are involved in regulating senescence. These genes are called transcription factors, and they act as master regulators in the cell. Transcription factors are able to bind to DNA and influence the expression of other genes. Oftentimes, changing how a transcription factor is expressed can have a large impact on many other downstream targets.

Previous work found a specific transcription factor, known as NAM-B1, which promoted the onset of senescence (4). When this transcription factor wasn’t active, senescence in wheat was significantly delayed (Figure 3). This delayed senescence was also correlated with a drop in the nutritional content of the wheat grain. This suggested that the timing of senescence could directly influence the levels of nutrients and proteins in the grain. Notably, grain size was not affected by the change in nutrient content and senescence timing, suggesting that studying the NAM-B1 gene might provide insight into how to break the trade-off between quality and yield.

Figure 3: Reducing the action of NAM-B1 (left) leads to delayed senescence in wheat compared to the wild-type plant (right). Panel from (4).

I’m now trying to identify new transcription factors that also regulate the timing of senescence. One way that we’re approaching this question is to look for proteins that interact with NAM-B1. We know that the NAM-B1 transcription factor is only functional when it is bound to another transcription factor from the same family, known as the NAC family. This partner might be another copy of NAM-B1 itself, or it could be a different NAC transcription factor entirely. We hypothesised that NAC transcription factors that bind NAM-B1 might also regulate senescence. To study this, we can use different experimental techniques in species as varied as yeast and Nicotiana benthamiana, a relative of tobacco, to look for proteins that can bind to NAM-B1.

Once I’ve identified proteins that bind to NAM-B1, the next question is what these proteins do in the wheat plant. A recently developed resource, the wheat TILLING population, has started to make this process much quicker and easier (5). This is a large set of different lines of wheat that have been mutated by a chemical known as ethyl methanesulfonate (or EMS). This chemical leads to specific single-base-pair changes in the DNA sequence. This means that, in at least one of the thousands of different wheat lines, you’re very likely to find a mutation that knocks out the action of your favourite gene. All of the mutated wheat lines in this TILLING population have had their genes sequenced. This means that all of the mutations in the genes have been identified and catalogued. Now it’s very easy for us to search for mutations in a gene we’re interested in, and we can order the lines we want online.
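In practice, this search boils down to filtering a large catalogue of sequenced mutations. Here is a minimal sketch of the idea in Python—the line names and effect categories below are invented for illustration, and the real TILLING resource has its own online search tools:

```python
# Toy mutation catalogue: (wheat line, gene, predicted effect of the EMS mutation).
# These entries are made up for illustration only.
catalogue = [
    ("LineA", "NAM-B1", "premature_stop"),   # likely knock-out
    ("LineB", "NAM-B1", "missense"),         # amino acid change
    ("LineC", "NAM-A1", "premature_stop"),
    ("LineD", "NAM-B1", "synonymous"),       # no protein change
]

def knockout_lines(catalogue, gene):
    """Return the lines carrying a likely knock-out (premature stop) in `gene`."""
    return [line for line, g, effect in catalogue
            if g == gene and effect == "premature_stop"]

print(knockout_lines(catalogue, "NAM-B1"))  # → ['LineA']
```

With the real catalogue, the same kind of filter—by gene and by predicted mutation effect—is what lets you pick out the lines worth ordering.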

After identifying mutations in the genes I’m interested in, I then need to start making crosses before I can look at the effect. This is because, unlike us, wheat is a polyploid. This means that wheat has three different genomes, a legacy of the way wheat was domesticated from three different wild grasses (Figure 1). One of the big effects of this is that there are usually at least 3 copies of each gene—one for each genome. So a mutation in one of the three genes may not actually make any difference to the plant, as the other two copies can compensate. As a result, it’s very important to make crosses so that all of the copies of the genes have mutations in them. Otherwise it would be very easy to think that a gene isn’t important as a single mutation doesn’t cause any change. This polyploidy is one of the reasons that breeding in wheat has historically been so difficult, as random mutations are unlikely to happen more than one copy and are thus often obscured—what can be called the “hidden variation” (2).

Once you’ve found your candidate genes, identified mutated lines, and made all of your crosses, you’re ready to see if your gene has an effect. I do most of my trials in the greenhouse, so that I can look at my plants on a smaller scale than you would need for the field. By scoring for senescence onset and progression in my mutant plants, I’m able to identify whether my mutants influence the timing of senescence (Figure 4). This is quite important because earlier senescence may lead to increased nutrient content, and senescence timing is a useful proxy as it’s quick and cheap to score. After identifying mutant lines that have an interesting phenotype (in this case variation in senescence timing), I can directly measure the levels of nutrients such as iron and zinc in the grain. This is an essential final step to see how the variation in senescence timing correlates with the grain nutrient content.

Figure 4: Variation in chlorophyll breakdown in mutant plants. The mutant plant on the left has yellow leaves, indicating that the chlorophyll is being broken down much earlier than the wild-type plant on the right. This suggests that certain pathways associated with senescence are being activated earlier in the mutant plant.

Currently in my research, I’m still in the process of scoring my plants for senescence and identifying interesting mutants. Wheat takes quite a long time to grow in the greenhouse—about 4 months from seed to seed—so working through the generations needed for crossing takes a real investment of time. A new technique for wheat growth called, appropriately, “Speed Breeding” is starting to change this (6). By growing wheat under special LED lighting for 22 hours a day in rooms where the environment is kept constant, we can reduce the time for each generation to between 8 and 10 weeks. This is a significant time saving, and is particularly powerful for generating new lines from crosses.

It still remains to be seen whether the proteins that I found to interact with NAM-B1 play a significant role in regulating senescence. There are some promising initial results from the mutants I’ve developed, but it will require another few sets of experiments in the glasshouse and the field before I’m sure we’ve homed in on good candidates. Watch this space!

I was never a kid who was sure about what professional career I wanted when I grew up. And this has been a good thing for me, because it has let me experience many different fields and led me to where I am today.

I was born in the north of Spain, in a mining zone of Asturias. My father was a coal miner and my mother a housewife. I attended a local school and a local high school. My grandmother says I was an unusual kid, preferring to be bought a book rather than a box of sweets. I also started learning English when I was 6 years old, and spent my free time reading historical novels and biographies.

I enjoyed visiting museums and monuments, and I used to search for information in my town’s library before going on an excursion. I loved to write stories and tales, and had always obtained high marks in school, which led my teachers to suggest that I study medicine. But I always changed my mind – from architecture, to journalism or even dentistry, depending on the book I was reading or the museum I’d just visited.

At that age, only one thing was clear: I wanted to be an independent and strong woman like the ones that inspired me. I hadn’t seen many role models during my primary education, but one teacher told us about Marie Curie. At the library, I discovered Rita Levi-Montalcini and the Brontë sisters.

During the last year of high-school I was a mess, and the pressure was high because I had to make a decision. All I had were doubts.

In Spain at that time, after finishing your last secondary education course, students who wanted to continue to a degree had to take a general exam, the PAU. You could choose the subjects you wanted to be tested on and, after the exams, you were given a mark calculated from your secondary school marks and your PAU results. This mark determined which degrees you could register for.

At that point, I decided to take more exams than necessary on the PAU in order to keep my options open across different types of degree, for example, science, engineering, or languages… But then the worst moment of my student life arrived, and I had to decide.

I had two options in mind: a Software Engineering degree and a Biology degree. I must admit that, at the time, all I knew of engineering were the stereotypes, and I never liked video games or anything related to hardware, so I decided that a Biology degree would suit me better.

BIOLOGY DEGREE AND NEUROSCIENCE MASTERS

During my degree, I decided that plants and animals were not my passion, but I loved Microbiology, Genetics, Immunology and Neuroscience. I discovered more female role models, researchers who really inspired me, whose lives were incredible to me. I worked hard during my degree and travelled a lot during the summers, thanks to some scholarships that I was awarded (I spent one month in Lowestoft, another in Dublin, and another one in Toronto), and started learning German.

Azahara in the lab

During the second year of my biology degree, I decided that I would become a scientist, and started to look for a professor who would let me gain some experience in their laboratory.

During my penultimate year, I started working in a Neuroscience laboratory, studying the 3D pattern of eye degeneration in C3H/He rd/rd mice. After finishing my degree, I decided to enrol in a Masters in Neuroscience and Behavioural Biology in Seville. During this masters, I worked in another Neuroscience laboratory doing electrophysiological studies, trying to understand how information is transformed in the cerebellar-hippocampal circuit and how this mechanism could allow us to learn and memorise.

This was a period of my life where I worked long hours, the experiments were very intense, and I had the opportunity to meet important scientists from all over the world. I also had a physicist colleague who analysed all our data and developed custom programmes in Matlab, which impressed me profoundly.

IMMUNOLOGY PHD

After this period, I continued working in Science, but I decided to start my PhD in Immunology, back in Asturias.

I worked in a laboratory in which, thanks to my friends in the lab, every day was special. We worked hard studying different types of tumours and testing different molecules, but we also had time to share confidences and laughs. After three years, I earned my PhD in Immunology and, as it was the normal thing to do, I started looking for a post-doc position.

Rather than feeling happy or enthusiastic about the future, I found myself upset and demotivated. I really didn’t want to carry on being a scientist. A huge sense of failure invaded me, but as J.K. Rowling said, “It is impossible to live without failing at something, unless you live so cautiously that you might as well not have lived at all – in which case, you fail by default”.

I want to specify that I don’t consider my PhD a waste of time – it has given me, apart from scientific publications, many important aptitudes and abilities, such as teamwork, analysis, problem solving, leadership, organisational skills, effective work habits, and better written and oral communication.

BECOMING A SOFTWARE DEVELOPER

As you might imagine, this was a hard moment of my life. I was unemployed, and doubtful about my professional career – just as I had been after high school.

Thanks to my husband, who supported me while I changed career, I decided to give software development a try. As I didn’t have the money or time to start a new degree, I signed up for a professional course in applications software development. The first days were difficult, since all the other students were young and I didn’t feel at ease.

But as I learned languages such as HTML, CSS, JavaScript and Java, I also participated, with good results, in some software competitions, which allowed me to gain confidence.

In 2015 I started working as a software developer with .NET MVC, a framework that I hadn’t studied during my course, but I had the necessary basics to learn it quickly and become part of a team. For me, one of the most marvellous things about software development is that it is built on teamwork.

I also discovered that there are a lot of people working in this field who love to exchange knowledge, and I regularly go to events and meetups. I have also recently started giving talks and workshops, some of them technical, with the aim of promoting the presence of women in technology.

Women and girls need to be encouraged to discover what software development really is. The software industry needs them. Software can be better, but only if it is developed by diverse teams with different opinions, backgrounds, and knowledge.

Lisa is 37 years old and she has just broken up with her long-term boyfriend. She always imagined that this relationship would lead to marriage and children. Lisa is stable and happy in her career. However, she is now worried that if she does not meet someone new, and soon, her biological clock will be merciless with her and she will be left childless. After a visit to a fertility clinic she decides to freeze her eggs, in order to remove the pressure of having to rush into a new relationship. She wants time and is not ready to date again. She wants to raise a child with a committed partner and believes that freezing her eggs will offer her the best chance of ensuring this.

The story of Lisa is fictional, but reflects the current experience of many women who are availing of social egg freezing.

SPERM, EMBRYOS, EGGS AND THE BIRTH OF SOCIAL EGG FREEZING

Sperm has been successfully frozen since the 1950s using a technique called slow-freezing, and embryo freezing has been an established technique since 1992.[1] On the other hand, egg freezing has been considered experimental until very recently. This was mainly due to the fact that eggs contain a higher amount of water than embryos.[2] The slow freezing of eggs results in the formation of ice crystals, which damage the cell and result in lower success rates.[3] Therefore, historically, egg freezing was only accessible to women with cancer or genetic diseases which cause premature infertility, as a small chance to conceive in the future was better than none at all.[4]

The experimental status of egg freezing was lifted in 2012 in Europe[5] and 2013 in the USA[6] due to advances in freezing methods, particularly a process known as vitrification, which involves rapid cooling of the eggs in liquid nitrogen without the formation of ice crystals. This is highly effective for egg freezing. Therefore, egg freezing began to be offered to healthy, fertile women and social egg freezing was born. This is the idea that women freeze their eggs for lifestyle reasons, which include: to prevent age-related infertility, to postpone motherhood due to their career, to find a suitable partner, to be financially stable, to be psychologically and emotionally ready to become a mother, and to expand their reproductive autonomy.[7]

LAW, AUTONOMY AND FEMINIST BIOETHICS

My research looks at social egg freezing in Europe from a legal and feminist bioethical perspective. I am assessing the impact of the law on social egg freezing in Europe, particularly in the United Kingdom and Ireland, to determine whether the law enhances or diminishes women’s reproductive options. For instance, my research has identified that Austria, France and Malta have specific laws prohibiting egg freezing for non-medical reasons,[8] diminishing women’s options in those countries.

In the context of autonomy, traditional liberal Bioethics tends to have an individualistic and self-sufficient approach, disregarding the influence power relations (“competing social forces”) can have on someone’s autonomy.[9] In a liberal society, freedom is given to the individual to do as they please with their body, as long as they do not cause harm to others.[10] This highlights the rights of an individual and removes the focus on the responsibilities that may arise from that choice, for example, a child and its well-being.[11]

However, the literature demonstrates that women take their relationships and the power structures that surround them into account when making decisions.[12] For instance, a woman who decides to freeze her eggs is not only thinking about herself, but also about her parents (the future grandparents), her future partner or husband, the health of her future baby (as younger eggs are preferable to avoid chromosomal abnormalities), her finances, her maturity, her employment situation and even society (to increase birth rates in an ageing population). Considering the numerous competing social forces, a woman may feel empowered or oppressed by social egg freezing, and that is why my research adopts a relational autonomy approach from Feminist Bioethics, particularly the theory of self-trust developed by Carolyn McLeod.

Trust is a relational aspect of life involving two people: a patient trusts their doctor on the grounds of an established moral relationship (doctor-patient). Self-trust lacks the two entities: when one trusts oneself, one is optimistic that one will act competently and within one’s moral commitments.[13] It is relational in the sense that it is moulded by the responses of others and by societal norms, as others give truthful and respectful feedback about oneself.[14] Therefore, if a doctor does not give realistic information about the potential risks and outcomes of egg freezing, a woman may make poor choices.

Research shows that women of reproductive age are misinformed regarding the cost, process and effectiveness of egg freezing, and that they want to be accurately informed about it.[15] Further, studies[16] demonstrate that residents and health professionals in the area of Obstetrics and Gynaecology lack accurate information about age-related fertility decline, hold conservative opinions, and are reluctant to inform healthy patients about social egg freezing.[17] Medical paternalism could explain this behaviour, and it needs to be remedied urgently.

EGG FREEZING – HOW IT WORKS

Women need to be aware that eggs are collected for freezing in the same way as they are for IVF. Women self-inject hormones for approximately 10-14 days to stimulate ovulation and, when the eggs are mature, they are collected surgically under sedation, with small risks of infection and bleeding.[18] Hormone injections are not completely risk-free and, although it is rare, some women may develop ovarian hyperstimulation syndrome (OHSS),[19] characterised by swollen ovaries, a bloated abdomen, pain, nausea, vomiting and, in severe cases, liver dysfunction and respiratory distress syndrome.[20]

Although IVF using thawed eggs is just as successful as using fresh eggs,[21] there are no guarantees that if a woman freezes her eggs, she will definitely have a baby – it just increases her chances.[22] That is simply the reality of fertility treatments, and doctors need to be forthcoming with information. Ideally, women will conceive naturally, having frozen their eggs merely as an ‘insurance policy’ and for peace of mind.[23] The age of the woman impacts the quality of the eggs, and doctors recommend that egg freezing occurs prior to the late-thirties.[24] There is considerable emphasis on educating young women on how not to get pregnant. Women also need to be educated about their biological ‘clocks’ and the possibilities and limitations of egg freezing.

CAREER AND THE PURSUIT OF ‘MR. RIGHT’ INSTEAD OF ‘MR. RIGHT NOW’

The reasons why women are freezing their eggs also need to be demystified. Baldwin interviewed women who availed of social egg freezing in the UK, the USA and Norway and discovered that they believe that there is a ‘right time’ to become a mother.[25] This is when, ideally, they are financially secure and in a stable relationship with a man who wishes to raise a child.[26] There has been considerable backlash from the media about social egg freezing, particularly since 2014, when Apple and Facebook offered egg freezing as a benefit for their female employees.[27] It raised concerns that women would be forced into it in order to be considered a ‘team player’ and ascend in their careers, treating motherhood as an inconvenience. However, the main reason why women are freezing their eggs has nothing to do with career advancement, it is actually due to the lack of a suitable partner and to avoid future regret.[28] In fact, one of the women interviewed by Baldwin stated: “I think the media really misrepresent women who have children later. I don’t know a single woman who has put off having babies because of her career, not a single woman I have ever met has that been true for.”[29]

Further, Baldwin and her team coined the term “panic-partnering” to express what future regret meant for the women in the study.[30] This is the fear that they might run out of time and settle for any man, rush into having a child purely to avoid childlessness, and regret this later once the relationship fails.[31] These women also rejected the idea of using a donated egg or having a baby alone with donated sperm, as they wanted the ‘whole package’ – a committed relationship and a father to their genetically-related child.[32] Social egg freezing allows women to ‘buy time’ to find this right partner.

There is ongoing research at the London Women’s Clinic to assess why women are freezing their eggs.[33] Zeynep Gurtin from the University of Cambridge chairs open seminars for single women at the clinic and has identified similar women to those from Baldwin’s research: they are highly educated, in their late thirties and early forties and are “frustrated by their limited partnering options.”[34] These women want to find ‘Mr. Right’, not ‘Mr. Right Now’. Gurtin affirms: “as women become more and more successful in educational and career terms, they have begun to outnumber similarly qualified men, and will need to adjust their partner expectations, embark on single parenting, embrace childlessness, or put some eggs in a very cold basket.”[35]

I recently attended one of these seminars and found the London Women’s Clinic to be a highly positive environment, with counselling and support groups available for their clients. The open seminars are a good opportunity for women to obtain realistic information in clear terms, without it being a sales pitch. Research from the USA[36] affirms that a considerable number of women regret freezing their eggs, particularly if a low number of eggs are obtained. They also complained about a lack of emotional support and counselling.[37] Therefore, it is crucial that clinics offer counselling both during and after egg freezing to ensure that women have realistic expectations as to what the technology can and cannot do.

COSTS

Social egg freezing is not covered by health insurance[38] and is therefore a private procedure, costing between £3,000 and £3,500 in the UK[39] and approximately €3,000 in Ireland.[40] This raises questions of social justice and fairness, as only women with greater financial means can access egg freezing for non-medical reasons. Further research focusing on this issue is necessary.

FREEDOM FROM EMBRYO FREEZING AND LEGAL DISPUTES

The success of egg freezing expands women’s reproductive autonomy as it frees them from having to freeze embryos with a partner. In 2007, a British case reached the European Court of Human Rights (ECtHR). In Evans v. United Kingdom, the applicant, Natallie Evans, had ovarian cancer and underwent IVF with her partner to create six embryos to be frozen. When the relationship ended, the ex-partner withdrew his consent for the embryos to be used. The applicant could no longer extract eggs, and the six embryos were her last opportunity to have a genetic child. The ECtHR discussed whether there was a violation of article 2 (right to life) and article 8 (right to respect for privacy and family life). It was decided that, since embryos do not have a right to life in the UK, there was no violation of article 2.[41] The Court also found that overruling someone’s withdrawal of consent, even in this exceptional case, would not violate article 8 or exceed the margin of appreciation.[42]

In other words, the ECtHR decided that the ‘right not to procreate’ of the ex-partner overruled the ‘right to procreate’ of the applicant, and the embryos had to be discarded. Ms. Evans could have created embryos with donor sperm, avoiding legal disputes. However, as has been demonstrated, women wish to have a partner to raise a child with. The options for women have expanded, and if they freeze their eggs it is their sole decision whether to use them for IVF with a partner or with donor sperm, to donate them to another woman, or to give them for research.

GAMETE STORAGE AND A CALL TO ACTION

Current technology allows eggs to be frozen indefinitely. In the UK, the Human Fertilisation and Embryology Act determines that gametes can be stored for up to 10 years for non-medical reasons and up to 55 years for medical reasons.[43] This reduces the benefits of social egg freezing. For instance, if a woman freezes her eggs at age 27 to ensure she has the best possible eggs, she will have to use them prior to her 37th birthday. There is no time extension, which could cause a considerable amount of pressure for this woman, who believed she was buying herself extra time.

Kylie Baldwin, one of the most prominent researchers of social egg freezing in the UK, has created a petition to convince the UK Government and Parliament that the law needs to change.[44] Signatures from UK citizens and residents are requested at this moment, prior to the 27th of October 2018, in order to be reviewed by the UK Government. This movement is highly important, and I advise all UK citizens and residents to sign it.

In Ireland, the General Scheme of the Assisted Human Reproduction Bill 2017 also adopts this 10-year time limit for non-medical gamete freezing.[45] If the bill remains unaltered when passed as a law it will raise the same issues that are currently being debated in the UK. Perhaps, there is still time for an amendment in the Irish bill.

CONCLUSION

Social egg freezing is quite a recent development, and further interdisciplinary research is required to examine its legal, sociological, feminist and economic implications. This is needed in order to gain a complete picture of the technology and the impact it has on women’s lives, relationships and society as a whole. There is a risk that women are gambling with their fertility by ‘putting all their eggs in one basket’. That is why social egg freezing must be approached with caution and with realistic expectations by women in order to avoid potential disappointment. However, it is an exciting opportunity, and it is quite clear that the rights and freedoms available to women in relation to their reproductive autonomy have expanded significantly in the last century. This is further evidenced by the very recent successful result in Ireland’s referendum to repeal the 8th amendment (a constitutional ban on abortion which was introduced in 1983 and which allowed for abortion only where a woman’s life was at risk).

I would like to dedicate this post in memory of Grace McDermott, co-founder of Women Are Boring, who I met at the induction of our PhD programme in 2014 and became friends with. She was a wonderful person and I am happy to have had her in my life. I am sure she would have strong opinions about social egg freezing and we would have had some lively discussions about the current state of it.

[10] Catriona Mackenzie, ‘Conceptions of Autonomy and Conceptions of the Body in Bioethics’ in Jackie Leach Scully, Laurel E. Baldwin-Ragaven and Petya Fitzpatrick (eds), Feminist Bioethics: At the Center, on the Margins (The Johns Hopkins University Press 2010) 72-73

[25] Kylie Baldwin, ‘“I Suppose I Think to Myself, That’s the Best Way to Be a Mother”: How Ideologies of Parenthood Shape Women’s Use of Social Egg Freezing Technology’ (2017) 22 Sociological Research Online 1, 5

At the start of 1995, we knew of only 9 planets – Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune and Pluto. Although we have since lost Pluto, we have now confirmed over 3,700 exoplanets — planets orbiting a star other than our sun. These exoplanets have been discovered by various methods, but the vast majority have been detected via indirect methods — measuring the influence of an exoplanet on its host star. We have also managed to directly image a number of exoplanets. This is the most difficult technique, since most planets are lost in the bright glare of their host star. In recent years, in addition to these companion exoplanets, there have been a number of discoveries of so-called rogue, or free-floating, planets. These are planetary-mass objects (less than about 13 times the mass of Jupiter) with no host star, wandering the Milky Way alone!

There are currently two theories about the formation of these isolated planets. The first theory suggests that they form in the same way as a star like our Sun — through the collapse of a massive interstellar cloud composed of molecular gas and dust. Once enough material is compressed at the centre of the cloud, nuclear fusion is ignited in the core, and a star is born. Once nuclear fusion is established, a star will continue to shine for about 10 billion years. However, in the case of our free-floating planets, we think that the core did not accrete enough material to trigger nuclear fusion. These objects can be thought of as ‘failed stars’, and spend their entire lifetimes cooling down. The other theory proposes that the free-floating planets were ejected from a planetary system. This can happen due to gravitational interactions with other planets within the system or a close encounter with another star. These interactions could fling a planet out of its orbit and leave it free to travel through interstellar space. Most likely, the free-floating planets that we have discovered to date formed through both of these mechanisms, but we have not yet found a way to tell the two groups apart.

Free-floating planets offer a huge advantage to astronomers studying exoplanets. The population of free-floating planets bears a remarkable resemblance to the small population of directly-imaged planets that we have discovered. The free-floating and companion exoplanets share similar masses, temperatures, ages and sizes, but while the companion exoplanets are extremely hard to image, the isolated planets are much easier to observe since they have no bright host star nearby. New instruments and technologies are currently being developed so that we may study companion exoplanets in detail in the future. In the meantime, the free-floating objects can be studied in exquisite detail and act as useful analogues for the directly-imaged companions, providing clues on what we might expect.

Brightness Modulations Signal Atmospheric Features

While it can take several hours to obtain an image of an exoplanet orbiting its host star, a medium-sized telescope can capture images of a free-floating planet on ~5 minute timescales. We cannot resolve the surface of a planet since it is too far away, but we can make use of the fact that they rotate to try to identify the presence of weather patterns in the planet’s atmosphere. This is done through a technique called ‘photometric variability monitoring’, which basically means measuring the brightness of an object over time. By monitoring the brightness over many hours we can approximate what the upper atmosphere of such an object looks like. The video below shows an artist’s concept of a brown dwarf with atmospheric bands of clouds, thought to resemble the clouds seen on Neptune and the other outer planets in the solar system. The dots on the bottom show the measured brightness of the planet over time, called the lightcurve of the planet.

Artist’s conception of a rotating free-floating planet with bands of clouds resembling those seen on Neptune. The dots on the bottom show the measured brightness of the object over time.

PSO J318.5-22: A Cloudy Free-floating Planet

In 2015 I used the New Technology Telescope in La Silla, Chile to observe the free-floating planet PSO J318.5-22. PSO J318.5-22 is a free-floating planet situated 80 light-years from Earth, with a temperature of 800°C and a mass 7 times that of Jupiter. This object is unusually red compared to other objects with similar temperatures, which is thought to be due to the presence of very thick clouds in its atmosphere. Using the images, we could measure the brightness of this object in each frame, and found that the brightness of this isolated planet changed by up to 10% over the course of 5 hours. Follow-up observations showed that the lightcurve is periodic, repeating itself every ~8.6 hours, indicating that this is the rotational period of the planet. Every 8.6 hours an atmospheric feature, most likely silicate clouds and iron droplets, rotates in and out of view. This was the first detection of weather on a planetary-mass object, and hinted that these atmospheric features may be common on extrasolar planets.
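To give a flavour of how a rotation period can be pulled out of a lightcurve, here is a minimal, purely illustrative Python sketch. It generates synthetic photometry (every number invented; real analyses use periodogram tools on actual telescope data) and recovers the period by trial folding: folding on the correct period minimises the point-to-point scatter of the phased lightcurve.

```python
import numpy as np

# Illustrative sketch only: recover a rotation period from *synthetic*
# photometry by trial folding. All numbers here are invented.

rng = np.random.default_rng(0)
true_period_h = 8.6                        # assumed rotation period (hours)
t = np.sort(rng.uniform(0, 50, 400))       # observation times (hours)
flux = 1.0 + 0.05 * np.sin(2 * np.pi * t / true_period_h)  # ~10% peak-to-trough
flux += rng.normal(0, 0.005, t.size)       # photometric noise

def fold_scatter(period):
    """String-length-style statistic: folding on the true period
    minimises point-to-point scatter in the phased lightcurve."""
    phase = (t % period) / period
    f = flux[np.argsort(phase)]
    return np.sum(np.diff(f) ** 2)

trial_periods = np.linspace(5, 12, 2000)
best = trial_periods[np.argmin([fold_scatter(p) for p in trial_periods])]
print(f"best-fit period: {best:.2f} h")    # should land near 8.6 h
```

In practice astronomers use more sophisticated tools (e.g. Lomb–Scargle periodograms) that handle uneven sampling and noise properly, but the underlying idea is the same.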

We then went on to observe PSO J318.5-22 simultaneously using the Hubble Space Telescope and the Spitzer Space Telescope, which allowed us to track the brightness of our target in a variety of different wavelengths with unprecedented accuracy. The new lightcurves revealed that although all showed brightness modulations in agreement with an 8.6-hour rotational period, the lightcurves obtained from the Hubble and Spitzer telescopes appeared ‘out of phase’. This means that when the planet appeared at its brightest in the Hubble images, it appeared very faint with the Spitzer Space Telescope, and vice versa. The Hubble and Spitzer telescopes differ in the wavelengths they use — Hubble observations are in the near-infrared while Spitzer probes longer wavelengths in the mid-infrared. Different wavelengths are sensitive to different heights in the atmosphere of the planet — the Hubble telescope sees deep into the planet’s atmosphere while the Spitzer wavelengths only see the highest altitudes. The observed shifts between lightcurves suggest that we are observing different layers of clouds located at different vertical positions in the atmosphere. These types of observations have thus allowed us to explore both the horizontal and vertical cloud structure of PSO J318.5-22, a rogue planet lying 80 light-years away.

Future Exoplanet Companion Studies with JWST

Now that we have developed the technique of photometric variability monitoring, we hope to extend these studies to the directly-imaged exoplanet companions once the James Webb Space Telescope (JWST) launches. Due to be launched in 2020, JWST will revolutionise all fields of astronomy by providing unparalleled sensitivity to astrophysical signals at a wide range of wavelengths. JWST will allow us to extend the variability monitoring discussed above to exoplanet companions, such as the HR8799bcde planets shown below. This system of four planets is so far the only multi-planet system that has been directly imaged. By re-observing the HR8799bcde system over a number of years, astronomers could track the planets’ movement around their host star. The four planets shown here share very similar properties with free-floating planets such as PSO J318.5-22, and so we expect that they will show similar brightness changes over time. Current telescopes cannot obtain images of these planets at the sensitivity and cadence needed to measure photometric variability, but JWST will allow us to carry out these measurements for the first time.

Video showing four exoplanets orbiting their host star. The host star HR8799 harbours four super-Jupiters with periods that range from decades to centuries. Astronomers re-observed this system over a number of years to map out the orbits of these four exoplanets. Video: Jason Wang and Christian Marois.

Picture an expansive galaxy in your head. A vast space with thousands of twinkling dots.
As seconds pass, connections flash from dot to dot – fast enough to disappear before you can even focus on one – generating an intricate, pulsating web.

I’m not a cosmologist. I’m a cancer cell biologist, and I study subcellular signalling. You probably already know that cancer is a disease of uncontrolled cell growth. But cancer cells have not gained an alien skill in order to do so; they use the exact same growth signalling pathways that every other one of your cells uses. In a cancer cell, relatively small tweaks occur in normal signalling pathways, which render them dysfunctional, often hyperactive. The expansive, pulsating, galaxy-like web of communication imagery only goes part way in describing the system we are dealing with. Subcellular signalling is vast and mind-numbingly complicated, and after decades of molecular biology, we are still piecing links together with every additional study.

But a galaxy-like network, much like the task of studying it, is quite overwhelming and daunting. For simplicity, let’s imagine one signalling pathway in isolation, a bit like a chain of children in the school playground, playing a game of whispers. A message is passed from one child to the next down the line, but instead of the usual hilarity of miscommunication, our hypothetical game is pretty exact. An un-fun version of playground whispers, if you will. Much like those children in the playground, in a cell a message is passed from one part of the cell to another by sequential messenger molecules. For example, a message can be sent around the body in the blood in the form of a molecule. This molecular message binds to a receptor that sits on the surface of a cell, poised, waiting for this exact signal. The binding of the molecular message to this receptor flicks it from off to on. An on receptor turns on a nearby molecule, this on molecule turns on the next molecule, and so on and so forth, until the message is passed to the nucleus. Here, it tells the cell which genes are to be transcribed, in order to build proteins to accomplish a specific cellular task. In cancer, one or more of these signalling pathways stops working correctly because of a genetic mutation in a messenger molecule. To continue the metaphor, basically a child in the middle of the chain decides to go a bit rogue.

Cell signalling

Let’s take an example. There’s a proliferative signalling pathway called the mitogen-activated protein kinase pathway, or simply MAPK to its friends. In the middle of it is a molecule called Ras. Normally, this pathway fires a nice concise signal in response to a message from somewhere else in the body that tells this cell it needs to grow and divide into two daughter cells. Maybe, for instance, the human overlord has acquired a pesky paper cut and the cells need to grow to close the wound. In that case, the message binds to a receptor on the cell, a growth factor receptor, which communicates to Ras, and Ras turns on to communicate the signal to the next molecule, which passes it on to the next, and down and down a chain of messenger molecules into the nucleus, which initiates the steps that need to take place for the cell to divide. In this normal efficient situation, Ras returns to its off state as soon as it has passed its signal on to the next molecule, and in doing so ensures a safe and distinct message is given. A successful game of un-fun playground-whispers, and everyone can pat themselves on the back and go about their day.

A common mutational event in cancer is that Ras picks up a genetic mutation that means it becomes stuck in the on position. We call this constitutive activation, which basically just means stuck-in-the-on-position. With Ras constantly on, the signal is continuously fired from it to the next molecule, even in instances when it is inappropriate for the cell to divide. Hence, these cells acquire uncontrolled growth, outgrow their neighbours and can continue to mutate and grow and move and invade and…I think you all know how this story ends.

So the answer seems simple – turn Ras off, right? However, frustratingly, Ras has turned out to be a pretty much un-druggable molecule. Despite huge effort, the 3D surface of the protein doesn’t have pockets in which a potential drug could bind and correct it. However, efforts have been more successful in drugging its next-in-line messenger molecule, Raf. If, in our hypothetical chain of school children playing whispers, there’s one mischievous kid in the middle adding rubbish in willy-nilly that didn’t come from anyone before her, the damage is minimized if the next partner in line simply doesn’t pass the nonsense on. Raf inhibitors showed great promise in pre-clinical development, and in clinical trials of metastatic melanoma, a truly horrible aggressive disease. Things started to look up.
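The relay logic described above, and the effect of a Raf inhibitor, can be caricatured in a few lines of code. This is a toy on/off model, not a biochemical one; all names and behaviour are purely illustrative.

```python
# Toy caricature of the receptor -> Ras -> Raf relay described above.
# Not a biochemical model; everything here is illustrative only.

def relay(receptor_on, ras_stuck_on=False, raf_inhibited=False):
    """Does the growth message reach the nucleus?"""
    ras_on = receptor_on or ras_stuck_on   # mutant Ras fires regardless of input
    raf_on = ras_on and not raf_inhibited  # a Raf inhibitor breaks the chain
    return raf_on

print(relay(receptor_on=True))                       # healthy growth signal: True
print(relay(receptor_on=False))                      # healthy cell at rest: False
print(relay(receptor_on=False, ras_stuck_on=True))   # mutant Ras: True anyway
print(relay(receptor_on=False, ras_stuck_on=True,
            raf_inhibited=True))                     # inhibitor blocks it: False
```

The toy captures why inhibiting Raf works at first: it doesn't fix Ras, it just stops the nonsense being passed on.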

Until – Bam! The drugs stop working. In a patient who initially responded well, the disease comes back – and it’s more aggressive than ever. A heart-breaking yet frustratingly common scenario. The cell is a highly dynamic system with a lot of inter-connected pathways that can flip back and forth when needed, and a cancer cell, because of its unstable genome that is prone to mutations, is even more adaptable. You can put a road block in the signal chain – Ras’s whisper-partner keeps quiet, but cunning Ras simply finds another buddy in the playground to blurt rubbish to, aaand we’re back to square one. As useful to our understanding as chain-schemes are, the network-like galaxy, in all of its sobering complexity, is more realistic. You can start to get an idea of the difficulty of treating this disease.

So, what now? Some of my current work, and that of others, is trying to optimize multi-target approaches. If a cell can circumvent the Raf or similar inhibitor road-blocks quite rapidly, we must simultaneously or synchronously take away its back-up options, in a highly choreographed back and forth dance to the death. The idea is that a multi-target network approach, which removes back-door options, minimizes adaptation of cancer cells to inhibitors and hence drug resistance. The hope is that if we design smart enough multi-target approaches, we might just be able to topple the pillars of survival that these cells rely on.

Max Delbrück, a 20th century geneticist, wrote:

“Any living cell carries within it the experiences of a billion years experimentation by its ancestors. You cannot expect to explain so wise an old bird in a few simple words.”

“You’re not what I expected when you said you were a shark scientist.” Gee, thanks. I can’t tell you how many times I’ve heard that I don’t live up to someone’s preconceived mental image of what I should look like as a “shark scientist.” It doesn’t change the fact that I’m a marine biologist though, and that I am very passionate about my field.

I recently wrapped up my Masters in Marine Biology, focusing on “Habitat use throughout a Chondrichthyan’s life.” Chondrichthyans (class Chondrichthyes) are sharks, skates, rays, and chimaeras. Today, there are more than 500 species of sharks and about 500 species of rays known, with many more being discovered every year.

Over the last few decades, much effort has been devoted towards evaluating and reducing bycatch (the part of a fishery’s catch that is made up of non-target species) in marine fisheries. There has been a particular focus on quantifying the risk to Chondrichthyans, primarily because of their high vulnerability to overfishing. My study focused on five species of deep sea chimaeras (not the mythical Greek ones, but the just-as-mysterious real animal) found in New Zealand waters:

• Callorhynchus milii (elephant fish),

• Hydrolagus novaezealandiae (dark ghost shark),

• Hydrolagus bemisi (pale ghost shark),

• Harriotta raleighana (Pacific longnose chimaera),

• Rhinochimaera pacifica (Pacific spookfish).

These species were chosen because they cover a large range of depth (7 m – 1306 m), and had been noted as being abundant despite extensive fisheries in their presumed habitats; they were also of special interest to the Deepwater Group (who funded the scholarship for my MSc).

Although there is no set definition of what constitutes the “deep sea,” it is conventionally regarded as >200 m depth and beyond the continental shelf break (Thistle, 2003); in this zone, a number of species are considered to have low productivity, making them highly vulnerable targets of commercial fishing (FAO, 2009). Deep sea fisheries have become increasingly economically important over the past few years as numerous commercial fisheries become overexploited (Koslow et al., 2000; Clark et al., 2007; Pitcher et al., 2010). Major commercial fisheries exist for deep sea species such as orange roughy (Hoplostethus atlanticus), oreos (several species of the family Oreosomatidae), cardinalfish, grenadiers (such as Coryphaenoides rupestris) and alfonsino (Beryx splendens). Many of these deep sea fisheries have not been sustainable (Clark, 2009; Pitcher et al., 2010; Norse et al., 2012), with most of the stocks having undergone substantial declines.

Deep sea fishing can also cause environmental harm (Koslow et al., 2001; Hall-Spencer et al., 2002; Waller et al., 2007; Althaus et al., 2009; Clark and Rowden, 2009). Deep sea fisheries use various types of gear that can leave lasting scars: bottom otter trawls, bottom longlines, deep midwater trawls, sink/anchor gillnets, pots and traps, and more. While none of this gear is solely used in deep sea fisheries, all of it catches animals indiscriminately and can also damage important habitats (such as centuries-old deep sea coral). In fact, orange roughy trawling scars on soft-sediment areas were still visible five years after all fishing stopped in certain areas off New Zealand (Clark et al., 2010a).

Risk assessment involves evaluating the distributional overlap of the fish with the fisheries, where fish distribution is influenced by habitat use. For sharks, that risk assessment includes a lot of variables: the number of shark species (approximately 112 species of sharks have been recorded from New Zealand waters) with many different lifestyles, differences in the market value of different body parts (like meat, oil, fins, cartilage), which body parts are utilised for each species (for example, some sharks have both their fins and meat utilised but not their oil; some just have their fins taken, etc.) and how to identify sharks once on the market (Fisheries Agency of Japan, 1999; Vannuccini, 1999; Yeung et al., 2000; Froese and Pauly, 2002; Clarke and Mosqueira, 2002).

In order to carry out a risk assessment, you have to know your study animals pretty well. It should come as no surprise that little is known about the different life history stages of chimaeras, so I did the next best thing and looked at Chondrichthyans in general. My literature review synthesized over 300 published observations of habitat use across these different life history stages; from there, I used New Zealand research vessel catch data (provided by NIWA, the National Institute of Water and Atmospheric Research) and separated the records by species, sex, size, and maturity (when available). I then dove into the deep end of using a computer language called “R,” which is used for statistical computing and graphics. Using R, I searched the catch compositions for the signature of the life history stage I was looking for (for example: smaller-sized, immature fish of both sexes and few to no adults when searching for a nursery ground).
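To give a flavour of that filtering step, here is a tiny sketch, written in Python rather than the R actually used in the thesis. The records, field names and thresholds are all invented for illustration: the idea is simply to flag stations where nearly all the catch is immature fish, the expected signature of a nursery ground.

```python
# Hypothetical sketch of a nursery-ground filter. Records, fields and
# the 10% adult-fraction threshold are invented for illustration.

records = [
    {"station": "A", "length_cm": 18, "mature": False},
    {"station": "A", "length_cm": 21, "mature": False},
    {"station": "A", "length_cm": 19, "mature": False},
    {"station": "B", "length_cm": 62, "mature": True},
    {"station": "B", "length_cm": 55, "mature": True},
    {"station": "B", "length_cm": 23, "mature": False},
]

def nursery_candidates(records, max_adult_fraction=0.1):
    """Flag stations where almost all catch is immature fish."""
    stations = {}
    for r in records:
        stations.setdefault(r["station"], []).append(r)
    flagged = []
    for station, catch in stations.items():
        adults = sum(1 for r in catch if r["mature"])
        if adults / len(catch) <= max_adult_fraction:
            flagged.append(station)
    return flagged

print(nursery_candidates(records))  # station A qualifies; station B does not
```

The real analysis worked on far larger research-vessel data sets and considered sex and size structure as well, but the logic of screening catch composition against a hypothesised life-history signature is the same.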

The way we went about this thesis differs from previous work in that we first developed hypotheses about the characteristics of different habitat use, rather than “data mining” for patterns, and it therefore takes a structured and scientific approach to determining shark habitats. Our results showed that some life history stages and habitats could be identified for certain species, whereas others could not.

These complex—and barely understood—deep sea ecosystems can be overwhelmed by the fishing technologies that rip through them. Like sharks, many deep sea animals have a K-selected lifestyle, meaning that they take a long time to reach sexual maturity and, once they are sexually active, give birth to few young after a long gestation period. This lifestyle makes these creatures especially vulnerable, since they cannot repopulate quickly if overfished.

In order to manage the environmental impact of deep sea fisheries, scientists, policymakers and stakeholders have to identify ways to help re-establish delicate biological functions after the impacts made by deep sea fisheries. Recovery—defined as the return to the conditions that existed before damage by fishing activities—is not a concept unique to deep sea communities, and its pace depends on site-specific factors that are often poorly understood and difficult to estimate. Little is known about the biological histories and structures of the deep sea, and rates of recovery there may therefore be much slower than in shallow environments.

Management of the seas, especially the deep sea, lags behind that of land and of the continental shelf, but there are a number of protection measures already being put in place. These actions include, but are not limited to,

• regulating fishing methods and gear types,

• specifying the depths at which one can fish,

• limiting the volume of bycatch and the volume of catch,

• implementing move-on rules, and

• closing areas of particular importance.

Modifications to trawl gear and how it is used have made these usually heavy tools less destructive (Mounsey and Prado, 1997; Valdemarsen et al., 2007; Rose et al., 2010; Skaar and Vold, 2010). Fishery closures are becoming more common, with large parts of EEZs (exclusive economic zones) being closed to bottom trawling (e.g. New Zealand, North Atlantic, Gulf of Alaska, Bering Sea, USA waters, Azores) (Hourigan, 2009; Morato et al., 2010); the effectiveness of these closures is yet to be established.

And while this “ecosystem approach” to fisheries management is widely advocated, it does not help every deep sea animal or structure. Those that cannot move (sessile organisms) are still in danger of being destroyed. As such, ecosystem-based marine spatial planning and management may be the most effective fisheries management strategy for protecting vulnerable deep sea critters (Clark and Dunn, 2012; Schlacher et al., 2014). This management strategy can include marine protected areas (MPAs) to restrict fishing in specific locations, alongside other management tools, such as zoning or spatial user rights, which affect the distribution of fishing effort in a more targeted manner. Using spatial management measures effectively requires new models and data, and such measures will always have limitations, given how little data on the deep sea exists and how hard this environment is to reach.

So what does it all mean in regards to my thesis? Well, for one thing, there is a growing acknowledgement that these unique ecosystems require special protection. And as any scientist knows, there are still many unanswered questions about just how important this environment is (especially certain structures).

On a more shark-related note, not all life-history stage habitats were found for my chimaeras, and this may be because these are outside of the coverage of the data set (and likely also commercial fisheries), or because they do not actually exist for some Chondrichthyans. That cliffhanger is research for another day, I suppose…

This project could not have been done without the endless support of my family and friends; those who have supported me since day one of my marine biology adventures. They’re the ones who stick up for me whenever I hear, “You’re not what I expected when you said you were a shark scientist.” I am not really sure what the stereotype of a shark scientist is supposed to be; thankfully, I grew up somewhere people are accepted and judged by who they are and what they do. However, I see this as a challenge, as it sets the stage for me to show that the mind of a shark scientist can come in all kinds of packages.

As a final note, I’d like to thank the New Zealand Seafood Scholarship, the Deepwater Group, and the researchers from the National Institute of Water and Atmospheric Research (NIWA) who provided funding, insight and expertise that greatly assisted the research. The challenge of venturing into complex territory is that not everyone will agree with every interpretation or conclusion of any piece of research, but it is a basis for having a discussion, which can only be good for all.

References:

Thistle, D. (2003). The deep-sea floor: an overview. In Ecosystems of the Deep Oceans. Ecosystems of the World 28.

FAO (2009). Management of Deep-Sea Fisheries in the High Seas. FAO, Rome, Italy.

All is not as it seems

We all delight in discovering that what we see isn’t always the truth. Think optical illusions: as a kid I loved finding the hidden images in Magic Eye stereogram pictures. Maybe you remember a surprising moment when you realised you can’t always trust your eyes. Here’s a quick example. In the image below, cover your left eye and stare at the cross, then slowly move closer towards the screen. At some point, instead of seeing what’s really there, you’ll see a continuous black line. This happens when the WAB logo falls on a small patch of the retina where the nerve fibres leave the eye in a bundle; consequently this patch has no light receptors – a blind spot. When the logo is in your blind spot, your visual system fills in the gap using the available information. Since there are lines on either side, the assumption is made that the line continues through the blind spot.

Illusions reveal that our perception of the world results from the brain building our visual experiences, using best guesses as to what’s really out there. Most of the time you don’t notice, because the visual system has been shaped by millions of years of evolution and then honed by your lifetime of perceptual experience, and is pretty good at what it does.

For vision scientists, illusions can provide clues about the way the visual system builds our experiences. We refer to our visual experience of something as a ‘percept’, and use the term ‘stimulus’ for the thing which prompted that percept. The stimulus could be something as simple as a flash of light, or more complex like a human face. Vision science is all about carefully designing experiments so we can tease apart the relationship between the physical stimulus out in the world and our percept of it. In this way, we learn about the ongoing processes in the brain which allow us to do everything from recognising objects and people, to judging the trajectory of a moving ball so we can catch it.

We can get insight into what people perceived by measuring their behavioural responses. Take a simple experiment: we show people an arrow to indicate whether to pay attention to the left or the right side of the screen, then one or two flashes of light appear quickly on one side, and they have to press a button to indicate how many flashes they saw. There are several behavioural measures we could record here. Did the cue help them be more accurate at telling the difference between one and two flashes? Did the cue allow them to respond more quickly? Were they more confident in their response? These are all behavioural measures. In addition, we can also look at another type of measure: their brain activity. Recording brain activity allows unique insights into how our experiences of the world are put together, and the investigation of exciting new questions about the mind and brain.
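The analysis of such a cueing experiment can be sketched in a few lines: split trials by cue condition, then compare accuracy and mean reaction time. The trial data below are invented purely for illustration, and real analyses would of course use many more trials and proper statistics.

```python
# Toy analysis of a cueing experiment: compare accuracy and mean
# reaction time for cued vs uncued trials. Data invented for illustration.

trials = [
    {"cued": True,  "correct": True,  "rt_ms": 420},
    {"cued": True,  "correct": True,  "rt_ms": 390},
    {"cued": True,  "correct": False, "rt_ms": 510},
    {"cued": False, "correct": True,  "rt_ms": 530},
    {"cued": False, "correct": False, "rt_ms": 580},
    {"cued": False, "correct": False, "rt_ms": 600},
]

def summarise(trials, cued):
    """Accuracy and mean RT for one cue condition."""
    subset = [t for t in trials if t["cued"] == cued]
    accuracy = sum(t["correct"] for t in subset) / len(subset)
    mean_rt = sum(t["rt_ms"] for t in subset) / len(subset)
    return accuracy, mean_rt

for cued in (True, False):
    acc, rt = summarise(trials, cued)
    print(f"cued={cued}: accuracy={acc:.2f}, mean RT={rt:.0f} ms")
```

In this made-up data set the cued trials are both more accurate and faster, which is the classic pattern a valid attentional cue produces.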

Rhythms of the brain

Your brain is a complex network of cells using electrochemical signals to communicate with one another. We can take a peek at your brain waves by measuring the magnetic fields associated with the electrical activity of your brain. These magnetic fields are very small, so to record them we need a machine called an MEG scanner (magnetoencephalography) which has many extremely sensitive sensors called SQUIDs (superconducting quantum interference devices). The scanner somewhat resembles a dryer for ladies getting their blue rinse done, but differs in that it’s filled with liquid helium and costs about three million euros.

A single cell firing off an electrical signal would have too small a magnetic field to be detected, but since cells tend to fire together as groups, we can measure these patterns of activity in the MEG signal. Then we look for differences in the patterns of activity under different experimental conditions, in order to reveal what’s going on in the brain during different cognitive processes. For example, in our simple experiment from before with a cue and flashes of light, we would likely find differences in brain activity when these flashes occur at an expected location as compared to an unexpected one.

One particularly fascinating way we can characterise patterns of brain activity is in terms of the rhythms of the brain. Brain activity is an ongoing symphony of multiple groups of cells firing in concert. Some groups fire together more often (i.e. at high frequency), whereas others may also be firing together in a synchronised way, but firing less often (low frequency). These different patterns of brain waves, generated by cells forming different groups and firing at various frequencies, are vital for many important processes, including visual perception.
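The idea of decomposing a recorded signal into its rhythms can be illustrated with a toy power spectrum. This sketch simulates an MEG-like trace containing a strong 10 Hz ("alpha"-like) oscillation and a weaker 40 Hz one (frequencies, amplitudes and noise levels all invented), and uses a Fourier transform to find the dominant rhythm.

```python
import numpy as np

# Toy sketch: find the dominant rhythm in a *simulated* MEG-like signal
# via a power spectrum. All frequencies and amplitudes are invented.

fs = 250                                   # sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)                # 4 seconds of signal
rng = np.random.default_rng(1)
signal = (1.0 * np.sin(2 * np.pi * 10 * t)     # strong 10 Hz "alpha"
          + 0.3 * np.sin(2 * np.pi * 40 * t)   # weaker 40 Hz "gamma"
          + rng.normal(0, 0.5, t.size))        # sensor noise

power = np.abs(np.fft.rfft(signal)) ** 2       # power at each frequency
freqs = np.fft.rfftfreq(t.size, 1 / fs)
dominant = freqs[np.argmax(power[1:]) + 1]     # skip the DC component
print(f"dominant frequency: {dominant:.1f} Hz")
```

Real MEG analyses use far more careful spectral methods (windowing, multitapers, source modelling), but the core move, from a time series to power at each frequency, is the same.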

What I’m working on

For as many hours of the day as your eyes are open, a flood of visual information is continuously streaming into your brain. I’m interested in how the visual system makes sense of all that information, and prioritises some things over others. Like many researchers, we show simple stimuli in a controlled setting, in order to ask questions about fundamental low-level visual processes. We then hope that our insights generalise to more natural processing in the busy and changeable visual environment of the ‘real world’. My focus is on temporal processing. Temporal processing can refer to a lot of things, but in my projects it means how you deal with stimuli occurring very close together in time (tens of milliseconds apart). I’m investigating how this is influenced by expectations, so in my experiments we manipulate expectations about where in space stimuli will be, and also about when they will appear. This is achieved using simple visual cues to direct your attention to, for example, a certain area of the screen.

When stimuli rapidly follow one another in time, sometimes it’s important to parse them into separate percepts, whereas other times it’s more appropriate to integrate them together. There’s always a tradeoff between the precision and stability of the percepts built by the visual system. The right balance between splitting up stimuli into separate percepts as opposed to blending them into a combined percept depends on the situation and what you’re trying to achieve at that moment.

Let’s illustrate some aspects of this idea about parsing versus integrating stimuli with a story, out in the woods at night. If some flashes of light come in quick succession from the undergrowth, this could be the moonlight reflecting off the eyes of a moving predator. In this case, your visual system needs to integrate these stimuli into a percept of the predator moving through space. But a similar set of several stimuli flashing up from the darkness could also be multiple predators next to each other, in which case it’s vital that you parse the incoming information and perceive them separately. Current circumstances and goals determine the mode of temporal processing that is most appropriate.

I’m investigating how expectations about where stimuli will be can influence your ability to either parse them into separate percepts or to form an integrated percept. Through characterising how expectations influence these two fundamental but opposing temporal processes, we hope to gain insights not only into the processes themselves, but also into the mechanisms of expectation in the visual system. By combining behavioural measures with measures of brain activity (collected using the MEG scanner), we are working towards new accounts of the dynamics of temporal processing and factors which influence it. In this way, we better our understanding of the visual system’s impressive capabilities in building our vital visual experiences from the lively stream of information entering our eyes.