Research questions often come about in response to an identified problem. Whether this problem is social, scientific, political, micro or macro, researchers attend to the minutest details with their chosen methods in order to effect a change in our understanding of these problems, and ultimately in the problems themselves. My own field of feminist media research has, at its core, issues of social justice and disruption; there is a conscious and deliberate aim to redress inequalities within the media-gender relationship. For many researchers and institutions, the aim is social change.

And change does happen. Since the 1970s – since feminism was realised as a worldwide movement – feminist media research has been disrupting and enriching discussions about the relationship between gender and media in society. The first feminist critique of media was heard in Mexico City in 1975, at the first of the three UN world conferences on women held around the UN Decade for Women, and this milestone, where women’s representation by media and within its structures was a central issue, added another critical dimension to the wider feminist movement, and to academia (see Byerly, 2016). The beginnings of feminist media scholarship were rooted within this identification of a problem and the desire to disrupt the status quo for the sake of equality and justice. In doing so, the public, lived experiences of women within media industries became an integral part of how research was directed and articulated in policy and institutional strategy. Today, the field is still evolving and challenging researchers to investigate the structures of our media institutions with fresh critical thinking.

The potential for direct social impact is inherent within feminist research. As some scholars have written about the relationship between feminist research and activism, “Many feminist researchers have been influenced by the research questions generated by women’s movements and consider it a moral imperative that their research should include women’s voices. They wish to change both the subjects and the objects of study” (Ackerly & True, 2010).

Among many academic institutions worldwide there is a strong and visible commitment to feminist research, gender studies, and the social good that can be achieved through engaging with communities. However, there is a problem within the protocols and practices of higher education. The reality which all too often acts as a bookend to huge swathes of good, painstakingly uncovered knowledge is one of inertia and stasis. From an insider’s perspective, you see, time and time again, significant pieces of research entering a normalised cycle of publication and citation, with the full potential of the research itself locked behind a paywall. Uninterrupted access to the vast majority of this knowledge requires you to be a member of the institutional framework – an academic, or a student – which again comes with its own price tag. The reality of these institutional frameworks is arguably the biggest fault of academia. It is a reality that requires us to think differently about the research journey.

It is a reality that can be redressed with the understanding that the full potential of socially impactful research resides in the encouragement and inclusion of public action and participation.

My journey from research question to publication brought me face-to-face with the stickiest catch-22 of higher education. In the summer of 2017 I investigated gender in the music industry. Over those months I interviewed six women who occupy various roles within the music industry in Ireland. Coupled with this was a content analysis of two popular music magazines: Hot Press (Ireland) and Rolling Stone (USA). I tracked over forty years of gender on the covers of these magazines and applied a total of 8,721 individual categorisations to the people in these spaces. The result: gender matters in the music industry, and it matters in very specific ways.

It was the decision to make my research publicly available on my website* earlier this year which revealed to me the voices that academic protocols and paywalls are excluding from the conversation. They are the people to whom the research is most relevant.

The first article I published discussed my initial research question and the journey which led me to the real question that needed to be asked about gender, power and visibility in the music industry. Specifically, in relation to women music producers, the question of ‘why are there so few?’ is imprecise, and the figure of ‘less than five percent’ so widely cited in articles is inaccurate. There is a difference between what we see in the visible, widely established music industry and what is actually there; the question we need to ask is ‘why do we see so few?’

The second article detailed the investigation of gender on music magazine covers, and for the first time the shared transatlantic trends of how gender is constructed on the covers of Hot Press and Rolling Stone were uncovered and articulated.

Combined, these two articles have been read 947 times in almost 60 countries across five continents. Within the space of six months the reach of these articles has exceeded my expectations, and they continue to be read today. Since the first article was published I’ve received comments and emails from female-identifying people in the music industry congratulating me on the research and thanking me for it. Through participatory spaces within online music networks, this research has travelled. Though I cannot say for sure that the people in Albania, Guadeloupe, Mongolia, or Serbia would not have read this research had it gone through academic protocols and been published in peer-reviewed journals, I can certainly speculate as to the difference in reach and accessibility.

One approach that aims to disrupt the traditional boundaries between researcher and subject, and calls for the restructuring of academic frameworks is Participatory Action Research (PAR). Through witnessing the tangible social impact of research sharing in public space, PAR has become critically important to how I conceptualise the research journey.

“Feminist principles of equality, reciprocity, partiality and valuing the voices of ordinary people as expert and authoritative on their own lives are reflected in PAR” (Pain, Kindon & Kesby, 2007).

PAR also asks us to challenge ourselves as researchers.

“PAR introduces new questions about representation, audience and product that compel us to rethink the role and impact of research. More than an epistemological shift, this approach brings commitments to action that push researchers to work in new and sometimes unfamiliar ways” (Cahill & Torre, 2007).

The argument presented by this article is directed squarely at the protocols, politics and paywalls of academic institutions. By all means, we need the peer-review system; research needs to be critiqued and scrutinised by objective overseers before it is given the seal of academic approval in a journal. But the cycle of publication and citation behind closed doors needs to be disrupted to allow for public engagement, to allow the subjects of these socially significant pieces of research to become part of the conversation. For feminist researchers taking inspiration from the questions raised by women’s liberation movements and feminist activism, and for activists who change the language of gender politics and give voice to the changing needs of an equal and just society, there is a mutual interest in the creation of shared participatory spaces, and in the disruption of a system which defines access to knowledge as a question of wealth, protocols and institutional status.

It would be easy to imagine that the Dark Universe was a malevolent force in the latest Star Wars movie, its leaders the enemy of the Federation, or that dark energy had some kind of demonic origin. However sinister it may sound, the dark side is entirely innocent and, in fact, comprises 95% of our Universe.

To put this in perspective: Earth is an almost infinitesimal speck in the cosmos. It orbits the Sun, one of billions of stars swirling around and bound together to form our galaxy, the Milky Way. Moreover, there are billions of galaxies in our Universe, each boasting its own hoard of stars and planets! Observational cosmology tells us that these structures, which are made of particles whose physics we understand, constitute only about 5% of everything in the Universe. The rest is dark matter and dark energy.

Dark matter is a special type of matter that neither emits nor interacts with light, but plays an important role in the story of our Universe. More than three quarters of the mass in our Milky Way galaxy (and other galaxies) is the invisible dark matter, rather than the stars and the planets. Therefore, the dark matter creates a large gravitational effect and acts as the glue holding our galaxies together.

Dark energy is even more mysterious. It is a form of energy that drives the accelerated expansion of our Universe. That is, our observations reveal that while stars stay tightly bound in galaxies, as cosmic time marches on the galaxies themselves are moving further away from each other, and our best theory holds dark energy responsible. While we can’t see these entities, we infer that they exist from their effect on things we can see.

It may sound like cosmologists have the Universe sussed, but there are cracks in our Standard Cosmological Model. While we understand the effect of dark matter in the universe, particle physicists have yet to detect its particle in their giant dark matter detector experiments. On the other hand, the best theoretical prediction for dark energy, derived from quantum physics, is starkly wrong. To put it politely, there is much work to be done! It is possible that we are missing something in our theory of gravity – Einstein’s General Relativity – and may need to invoke some new physics in order to solve the dark energy phenomenon. That is, just as Newtonian gravity, which satisfies experiments on Earth, was revolutionised by Einstein’s theory in order to explain measurements in the solar system, perhaps we need another upgrade to explain even larger-scale observations. We focus on observing how dark matter structures change over cosmic time, which sheds light on how dark energy evolves and allows us to test gravity on cosmological scales.

Cosmology has a vast toolbox of independent methods to understand the nature of the Dark Universe and to test the laws of gravity. Techniques include measurements of the brightness of supernovae (the explosive ends of binary pairs of unequal-mass stars); exquisite observations of the Cosmic Microwave Background (temperature fluctuations across the sky from the light emitted in the very early universe, just 380,000 years after the Big Bang); charting the distant Universe by obtaining precise velocities of, and distances to, galaxies; and meticulously measuring the shapes of distant galaxies. The latter is called weak gravitational lensing.

Weak gravitational lensing

As we observe a distant galaxy, we collect its light in our telescopes after it has journeyed across the Universe. According to General Relativity, dark matter, like any massive structure, warps the very fabric of the Universe, space-time, as depicted by the grid in the image below. The path that the light travels along, indicated by an arrow, also gets bent with the space-time, and as such the image of the galaxy that we capture appears distorted. The presence of dark matter or massive structures along the line of sight has the effect of lensing the galaxy: making it appear more elliptical in our images and inducing a coherent alignment among nearby galaxies.

A depiction of weak gravitational lensing. As light from distant galaxies travels towards us, it passes by massive structures of dark matter, shown here as grey spheres. Dark matter’s gravity curves the local space-time as well as the path that the light follows. This curvature distorts the images of the background galaxies that we then observe, with the amount of distortion depending on the distribution of dark matter along the light path. By measuring this distortion, we can infer the size and location of invisible massive structures (dotted circles). Image credit: APS/Alan Stonebraker; galaxy images from STScI/AURA, NASA, ESA, and the Hubble Heritage Team.

The stronger the average galaxy ellipticity is in a patch of sky, the more dark matter there is in that region of the Universe, assuming galaxies are, in reality, randomly oriented. Therefore, the induced ellipticity of the galaxies is a faint signature of dark matter inscribed across the Universe. If we can measure this alignment to extreme precision, and combine it with the equations of General Relativity, we can infer the location and properties of the matter – both visible and dark – between us and the galaxies. By mapping the evolution of the dark-matter structures over cosmic history and documenting the accelerating expansion of space and time, we learn about dark energy.
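The statistical trick of averaging away random orientations can be sketched in a few lines of code. The toy simulation below is my own illustration, not any survey’s actual pipeline: each simulated galaxy gets a large random intrinsic ellipticity plus a tiny coherent shear, and averaging over a million galaxies beats down the random part until the lensing signal emerges.

```python
import random

random.seed(42)

TRUE_SHEAR = 0.008      # sub-1% coherent distortion imprinted by dark matter
N_GALAXIES = 1_000_000  # a weak signal demands millions of galaxy shapes

# Each observed ellipticity = large random intrinsic shape + tiny shear.
observed = [random.gauss(0.0, 0.3) + TRUE_SHEAR for _ in range(N_GALAXIES)]

# The random intrinsic shapes average away; the coherent shear survives.
estimated_shear = sum(observed) / N_GALAXIES
print(f"estimated shear: {estimated_shear:+.4f} (true value {TRUE_SHEAR:+.4f})")
```

With a per-galaxy scatter of 0.3, the error on the mean of a million shapes is roughly 0.3/√1,000,000 = 0.0003, which is why such enormous galaxy samples are needed to pin down a distortion of well under one percent.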

I work as part of a European team, called the Kilo-Degree Survey, imaging a 5% chunk of the sky a few hundred times the size of the full moon. We have measured the positions and shapes of tens of millions of galaxies as they were when the universe was (at most) half its current age. While this sounds wildly impressive, we are only now seeing the tip of the iceberg of what is required to truly understand our Universe. That is because, while gravitational lensing is a powerful cosmological technique, it is extremely technologically challenging. The typical distortion induced by dark matter as a galaxy’s light travels through the universe is only enough to alter the shape of that galaxy by less than 1%. As the lensing effect is weak, in order to detect it we need to analyse the images of millions of galaxies. This entails a data challenge, necessitating rapid processing of petabytes of data.

A scientific hurdle arises because the weak lensing distortions are significantly smaller than the distortions that arise in the last moments of the light’s journey. Due to the effect of the Earth’s atmosphere and our imperfect telescopes and detectors, instead of measuring the shapes of galaxies in images that are beautifully resolved like the Hubble Space Telescope image below, in large lensing surveys galaxies can appear as fuzzy blobs that span only a few pixels. Just to up the ante, these terrestrial effects change between and throughout the night’s observations as the wind, temperature and weather vary, even in the exquisite conditions of the mountaintops of the Atacama Desert, Chile, where lensing data is often collected. In order to isolate the dark matter signature, the nuisance distortions are modelled to extremely high precision and then inverted, allowing an accurate recovery of the cosmological signal. Further complications arise from the physics of the galaxies themselves: they have an intrinsic ellipticity and dynamical processes that we do not perfectly understand, but must also factor into our calculations.

Hubble Space Telescope image of a cluster of galaxies called Abell 1689. The larger yellow galaxies are members of this massive galaxy cluster, bound within a dense clump of dark matter that gravitationally distorts the space and time around the cluster. The small blue objects are galaxies that are behind the cluster, whose light path has become bent as it journeys towards Earth, passing by the cluster. Gravitational lensing effectuates the giant curved blue arcs that you can see surrounding Abell 1689: the distorted images of the distant galaxies. The five blue dots with rainbow crosses are just stars in our own Milky Way Galaxy. Image credit: NASA/ESA/STScI.

The Kilo-Degree Survey, as well as similar American and Japanese experiments, acts as a stepping stone and a training ground for an epic coming decade for observational cosmologists. We are at the dawn of several major international projects that will survey the sky to greater depths and resolution than ever before. The Large Synoptic Survey Telescope will image the entire Southern sky every few nights, building the deepest and largest map of our cosmos; the Euclid satellite will survey the sky from space, eradicating the worry of Earth’s atmosphere; and the Dark Energy Spectroscopic Instrument will deliver extremely precise locations and velocities of over 30 million galaxies. I look forward to helping these projects to map the distant Universe, trace the evolution of dark matter and dark energy from 10 billion years ago to the present day and, in doing so, bring us closer to fathoming the other 95% of our Universe: the dark side.

It is a humbling field that asks what the Universe is made of and how its structure evolved to allow the formation of galaxies and our existence. In our insignificant snippet of the grand story of the Universe, it is remarkable that technology allows us to observe objects at distances beyond our comprehension and that our diverse range of measurements even vaguely fits a consistent model.

Building better bread: Using genetics to study senescence and nutrient content in wheat.

Wheat provides over 20% of the calories consumed worldwide, the second most of any crop after rice (1). Nearly all of us will eat wheat in one form or another every day—staple foods like bread and pasta as well as our favourite treats, from cake and biscuits to certain types of beer. For many cultures, wheat has been essential for thousands of years – it was originally domesticated around 10,000 years ago. The wheat we eat today is descended from 3 different kinds of wild grasses which crossed together at different times to produce the wild ancestor of wheat (Figure 1) (2). Some of us can take it for granted now that we’ll be able to pop down to the corner shop and pick up a loaf of bread at a moment’s notice, but it took thousands of years of selection by farmers to get to the wheat that we’d recognise today.

Figure 1: Wheat originated from two separate crosses between wild grasses. The first occurred around 400,000 years ago, producing wild emmer. Wild emmer then crossed with a different grass around 10,000 years ago. This final cross produced Triticum aestivum, which would be domesticated into bread wheat by humans. At each cross, the genomes of the wild grasses were combined, resulting in Triticum aestivum containing 3 separate genomes (shown as “AABBDD”, with each letter corresponding to one of the ancestral genomes). Figure courtesy of Dr. Cristobal Uauy.

This process of selection was accelerated in the mid-1900s, during the period called the “Green Revolution.” A combination of research into better breeding techniques and new chemical fertilizers, among other factors, contributed to the substantial increase in yield seen during this period. One critical change involved reducing the height of wheat plants, which allowed more energy from photosynthesis to be moved into the grain rather than being stored in the leaves and stems of the plants. The yield increases that came about due to the Green Revolution were essential to keep up with the demands of the growing world population.

Most of the work during the Green Revolution was focused on increasing yield alone, boosting the calories that could be extracted from a single field of wheat. But the benefits of wheat extend far beyond calories alone. Perhaps surprisingly, wheat provides 25% of the global protein intake (1). Most of us would think of meat or beans as our main sources of protein, but as a staple crop wheat is essential for our protein intake. The nutrients present in the wheat grain, like iron and zinc, are also essential in our diet.

Campaigns to eradicate hunger have had unprecedented success in recent years, and over 89% of the world’s population are able to obtain enough calories for their basic needs (3). Yet increasingly it is the nutrient content of our diets that is driving growing health crises globally. At one extreme, malnutrition, defined as the lack of essential nutrients in a diet that has sufficient calories, is one of the leading causes of childhood stunting (3). At the other extreme, obesity in both childhood and adulthood is increasingly common, partly as a result of highly calorific food with poor nutritional value becoming so easily available.

Quality Control

During the development of wheat, the period known as “senescence” is critical in regulating the amounts of proteins and nutrients in the developing grain. This is the period when wheat changes from its living, green state to the dead, yellowing state that is so familiar to us at the end of summer. As the leaves die, the molecules in the leaf start to break down and the elements that make up these molecules are transported from the leaves into the developing grain. At the same time, proteins and carbohydrates are also being remobilised from the leaves and moved to the grain. It’s this movement of nutrients and protein that is essential in establishing the quality of the grain. Different levels of protein determine what the grain can be used for. Bread making requires high-protein flour—this protein makes gluten, which creates the structure of bread. At the bottom end of the scale, lower-quality wheat can be used as feed for livestock and poultry. However, while increased quality is desired, historically a trade-off has been seen between wheat quality and yield (Figure 2).

Figure 2: Quality and yield often trade off against each other. As senescence moves later, yield tends to increase, while quality (such as protein and nutrient levels) tends to decrease. The reverse is found with earlier senescence. This leads to a balancing act with the timing of senescence—how can you maximise both yield and quality?

My research is focused on understanding how the process of senescence is controlled in wheat in the hope that we can use this knowledge to increase the nutritional quality of wheat grains. I’m particularly interested in studying genes that are involved in regulating senescence. These genes are called transcription factors, and they act as master regulators in the cell. Transcription factors are able to bind to DNA and influence the expression of other genes. Oftentimes, changing how a transcription factor is expressed can have a large impact on many other downstream targets.

Previous work found a specific transcription factor, known as NAM-B1, which promoted the onset of senescence (4). When this transcription factor wasn’t active, senescence in wheat was significantly delayed (Figure 3). This delayed senescence was also correlated with a drop in the nutritional content of the wheat grain. This suggested that the timing of senescence could directly influence the levels of nutrients and proteins in the grain. Notably, grain size was not affected by the change in nutrient content and senescence timing, suggesting that studying the NAM-B1 gene might provide insight into how to break the trade-off between quality and yield.

Figure 3: Reducing the action of NAM-B1 (left) leads to delayed senescence in wheat compared to the wild-type plant (right). Panel from (4).

I’m now trying to identify new transcription factors that also regulate the timing of senescence. One way that we’re approaching this question is to look for proteins that interact with NAM-B1. We know that the NAM-B1 transcription factor is only functional when it is bound to another transcription factor in the same family, the NAC family. This partner might be another copy of itself, or it could be a different NAC transcription factor entirely. We hypothesised that NAC transcription factors that bind NAM-B1 might also regulate senescence. To study this, we can use different experimental techniques in species as varied as yeast and Nicotiana benthamiana, a relative of tobacco, to look for proteins that can bind to NAM-B1.

Once I’ve identified proteins that bind to NAM-B1, the next question is what these proteins do in the wheat plant. A recently developed resource, the wheat TILLING population, has started to make this process much quicker and easier (5). This is a large set of different lines of wheat that have been mutated by a chemical known as ethyl methanesulfonate (or EMS). This chemical leads to specific single-base-pair changes in the DNA sequence. This means that, in at least one of the thousands of different wheat lines, you’re very likely to find a mutation that knocks out the action of your favourite gene. All of the mutated wheat lines in this TILLING population have had their genes sequenced. This means that all of the mutations in the genes have been identified and catalogued. Now it’s very easy for us to search for mutations in a gene we’re interested in, and we can order the lines we want online.
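As a rough sketch of the kind of lookup a sequenced, catalogued mutant population enables, here is a toy version in Python. The line names, gene label, record format and effect categories are all invented for illustration; they are not the real TILLING database’s schema.

```python
from dataclasses import dataclass

@dataclass
class Mutation:
    line: str    # TILLING line carrying this mutation
    gene: str    # gene affected by the EMS-induced base change
    effect: str  # predicted consequence of the single-base change

# A tiny stand-in for the catalogued, sequenced mutations.
catalogue = [
    Mutation("line_0042", "NAC_candidate_1", "premature_stop"),
    Mutation("line_0042", "unrelated_gene", "missense"),
    Mutation("line_1337", "NAC_candidate_1", "synonymous"),
    Mutation("line_2001", "NAC_candidate_1", "splice_site"),
]

# Keep only lines whose mutation is likely to knock the gene out.
KNOCKOUT_EFFECTS = {"premature_stop", "splice_site"}
knockouts = [m.line for m in catalogue
             if m.gene == "NAC_candidate_1" and m.effect in KNOCKOUT_EFFECTS]

print(knockouts)  # lines worth ordering for crossing experiments
```

The point of the sketch is the workflow, not the data: because every mutation is already sequenced and indexed by gene, finding candidate knock-out lines reduces to a simple filter rather than years of screening.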

After identifying mutations in the genes I’m interested in, I then need to make crosses before I can look at the effect. This is because, unlike us, wheat is a polyploid. This means that wheat has three different genomes, a legacy of the way wheat was domesticated from three different wild grasses (Figure 1). One of the big effects of this is that there are usually at least 3 copies of each gene—one for each genome. So a mutation in one of the three genes may not actually make any difference to the plant, as the other two copies can compensate. As a result, it’s very important to make crosses so that all of the copies of the gene have mutations in them. Otherwise it would be very easy to think that a gene isn’t important because a single mutation doesn’t cause any change. This polyploidy is one of the reasons that breeding in wheat has historically been so difficult, as random mutations are unlikely to happen in more than one copy and are thus often obscured—what can be called the “hidden variation” (2).
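The masking effect of polyploidy can be made concrete with a toy enumeration (my own illustration, assuming the simplest possible model, in which a phenotype appears only when all three homoeologous copies are knocked out):

```python
from itertools import product

def shows_phenotype(genotype):
    """Assume a change is visible only when the A, B and D copies are all mutated."""
    return all(genotype)

visible = 0
for genotype in product([False, True], repeat=3):  # all 8 mutation combinations
    label = "".join("m" if mutated else "+" for mutated in genotype)
    if shows_phenotype(genotype):
        visible += 1
        print(label, "-> phenotype visible")
    else:
        print(label, "-> no visible change")

print(visible, "of 8 combinations shows the phenotype")
```

Under this simple model only one of the eight genotypes (“mmm”) shows anything at all, which is why a single random EMS mutation is usually invisible and crosses are needed to stack mutations across all three genomes.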

Once you’ve found your candidate genes, identified mutated lines, and made all of your crosses, you’re ready to see if your gene has an effect. I do most of my trials in the greenhouse, so that I can look at my plants on a smaller scale than you would need for the field. By scoring for senescence onset and progression in my mutant plants, I’m able to identify whether my mutants influence the timing of senescence (Figure 4). Senescence timing is a useful proxy because earlier senescence may lead to increased nutrient content, and it’s quick and cheap to score. After identifying mutant lines that have an interesting phenotype (in this case variation in senescence timing), I can directly measure the levels of nutrients such as iron and zinc in the grain. This is an essential final step to see how the variation in senescence timing correlates with the grain nutrient content.

Figure 4: Variation in chlorophyll breakdown in mutant plants. The mutant plant on the left has yellow leaves, indicating that the chlorophyll is being broken down much earlier than the wild-type plant on the right. This suggests that certain pathways associated with senescence are being activated earlier in the mutant plant.

Currently in my research, I’m still in the process of scoring my plants for senescence and identifying interesting mutants. Wheat takes quite a long time to grow in the greenhouse—about 4 months from seed to seed—so it takes quite an investment of time to get through the generations needed for crossing. A new technique for wheat growth called, appropriately, “Speed Breeding” is starting to change this (6). By growing wheat under special LED lighting for 22 hours a day in rooms where the environment is kept constant, we can reduce the time for each generation to between 8 and 10 weeks. This is a significant time saving, and is incredibly powerful, particularly for the generation of new lines from crosses.

It still remains to be seen whether the proteins that I found to interact with NAM-B1 play a significant role in regulating senescence. There are some promising initial results from the mutants I’ve developed, but it will require another few sets of experiments in the glasshouse and the field before I’m sure we’ve homed in on good candidates. Watch this space!