Hearing about the HannoverGEN project made me feel envious and excited. Envious, because I wish my high school had offered the kind of hands-on molecular biology training provided to high school students in Hannover, the capital of the German state of Niedersachsen. Excited, because it reminded me of the joy I felt when I first isolated DNA and ran gels after restriction enzyme digests during my first year of university in Munich. I knew that many of the students at the HannoverGEN high schools would be similarly thrilled by their laboratory experience and perhaps even pursue careers as biologists or biochemists.

What did HannoverGEN entail? It was an optional pilot program initiated and funded by the state government of Niedersachsen at four high schools in the Hannover area. Students enrolled in the HannoverGEN classes would learn to use molecular biology tools typically reserved for college-level or graduate school courses in order to study plant genetics. Some of the basic experiments involved isolating DNA from cabbage or learning how bacteria transfer genes to plants, while more advanced experiments enabled the students to analyze whether the genome of a provided maize sample had been genetically modified. Each experimental unit was accompanied by relevant theoretical instruction on the molecular mechanisms of gene expression and biotechnology as well as ethical discussions regarding the benefits and risks of generating genetically modified organisms (“GMOs”). The details of the HannoverGEN program are only accessible through the Wayback Machine Internet archive because the award-winning educational program and the associated website were shut down in 2013 at the behest of German anti-GMO activist groups, environmental activists, Greenpeace, the Niedersachsen Green Party and the German organic food industry.

Why did these activists and organic food industry lobbyists oppose a government-funded educational program which improved the molecular biology knowledge and expertise of high school students? Some clues can be found in a 2012 press release entitled “Keine Akzeptanzbeschaffung für Agro-Gentechnik an Schulen!” (“No Procuring of Acceptance for Agricultural Gene Technology at Schools!”) by an alliance representing “organic” or “natural food” farmers, which accompanied the publication of a critical “study” with the same title (PDF), funded by this alliance as well as its anti-GMO partners. They feared that the high school students might become too accepting of biotechnology in agriculture and that the curriculum did not sufficiently highlight all the potential dangers of GMOs. By allowing the ethical discussions to not only address the risks but also mention the benefits of genetically modifying crops, students might walk away with the idea that GMOs could be beneficial for humankind. The group believed that taxpayer money should not be used to foster special interests such as those of the agricultural industry, which may want to use GMOs.

A response by the University of Hannover (PDF), which had helped develop the curriculum and coordinated the classes for the high school students, carefully analyzed the complaints of the anti-GMO activists. The author of the anti-HannoverGEN “study” had not visited the HannoverGEN laboratories, nor had he interviewed the biology teachers or students enrolled in the classes. In fact, his critique was based on weblinks that were not even used in the curriculum by the HannoverGEN teachers or students. His analysis ignored the balanced presentation of biotechnology that formed the basis of the HannoverGEN curriculum and the fact that discussing the potential risks of genetic modification was a core topic in all the classes.

Unfortunately, this shoddily prepared “study” had a significant impact, in part because it was widely promoted by partner organizations. Its release in the autumn of 2012 came at an opportune time for political activists because Niedersachsen was about to hold an election. Campaigning against GMOs seemed like a perfect cause for the Green Party, and a program which taught biotechnology techniques to high school students became a convenient lightning rod. When the Social Democrats and the Green Party formed a coalition after winning the election in early 2013, nixing the HannoverGEN program was formally included in the so-called coalition contract, a document in which the coalition partners outline their key goals for the upcoming four-year period. When one considers how many major issues and problems the government of a large German state has to face, such as healthcare, education, unemployment or immigration, it is mind-boggling that de-funding a program involving only four high schools received so much attention that it needed to be anchored in the coalition contract. In fact, it is a testament to the influence and zeal of the anti-GMO lobby.

Once the cancellation of HannoverGEN was announced, the Hannover branch of Greenpeace also took credit for campaigning against this high school program and celebrated its victory. The Greenpeace anti-GMO activist David Petersen said that the program was too cost-intensive because equipping high school laboratories with state-of-the-art molecular biology equipment had already cost more than 1 million Euros. The previous center-right government which had initiated the HannoverGEN project had been planning to expand the program to even more high schools because of the program’s success and the national recognition it received for innovative teaching. According to Petersen, this would have wasted even more taxpayer money without adequately conveying the dangers of using GMOs in agriculture.

The scientific community was shaken up by the decision of the new Social Democrat-Green Party coalition government in Niedersachsen. The decision was an attack on the academic freedom of schools, made under the guise of protecting them from special interests, while ignoring that the anti-GMO activists were promoting special interests of their own: the “study” attacking HannoverGEN was funded by the lucrative “organic” or “natural food” industry! Scientists and science writers such as Martin Ballaschk or Lars Fischer wrote excellent critical articles stating that squashing high-quality, hands-on science programs could not lead to better decision-making. How could ignorant students have a better grasp of GMO risks and benefits than those who receive relevant formal science education and can thus make truly informed decisions? Sadly, this outcry by scientists and science writers did not make much of a difference; the media did not seem to feel this was much of a cause to fight for. I wonder if the media response would have been just as lackluster if the government had de-funded a hands-on science lab to study the effects of climate change.

In 2014, the government of Niedersachsen then announced that it would resurrect an advanced biology laboratory program for high schools under the generic and vague title “Life Science Lab”. By removing from the title the word “Gen”, which seems to trigger visceral antipathy among anti-GMO activists, by de-emphasizing genome science and by removing any discussion of GMOs from the curriculum, this new program would leave students in the dark about GMOs. Ignorance is bliss from an anti-GMO activist perspective, because the void of scientific ignorance can be filled with fear.

From the very first day that I could vote in Germany during the federal election of 1990, I always viewed the Green Party as a party that represented my generation. A party of progressive ideas, concerned about our environment and social causes. However, the HannoverGEN incident is just one example of how the Green Party is caving in to ideologies, thus losing its open-mindedness and progressive nature. In the United States, the anti-science movement, which attacks teaching climate change science or evolutionary biology at schools, tends to be rooted in the right wing political spectrum. Right wingers or libertarians are the ones who always complain about taxpayer dollars being wasted and used to promote agendas in schools and universities. But we should not forget that there is also a different anti-science movement rooted in the leftist and pro-environmental political spectrum – not just in Germany. As a scientist, I feel that it is becoming increasingly difficult to support the Green Party because of its anti-science stance.

I worry about all anti-science movements, especially those which attack science education. There is nothing wrong with questioning special interests and ensuring that school and university science curricula are truly balanced. But the balance needs to be rooted in scientific principles, not political ideologies. Science education has a natural bias – it is biased towards knowledge that is backed up by scientific evidence. We can hypothetically discuss dangers of GMOs but the science behind the dangers of GMO crops is very questionable. Just like environmental activists and leftists agree with us scientists that we do not need to give climate change deniers and creationists “balanced” treatment in our science curricula, they should also accept that much of the “anti-GMO science” is currently more based on ideology than on actual scientific data. Our job is to provide excellent science education so that our students can critically analyze and understand scientific research, independent of whether or not it supports our personal ideologies.

Two scientific papers published in the journal Nature in the year 2000 marked the beginning of engineering biological circuits in cells. In the paper “Construction of a genetic toggle switch in Escherichia coli”, Timothy Gardner, Charles Cantor and James Collins created a genetic toggle switch by introducing an artificial DNA plasmid into bacterial cells. This DNA plasmid contained two promoters (DNA sequences which regulate the expression of genes) and two repressors (genes encoding proteins which suppress the expression of other genes) as well as a gene encoding green fluorescent protein that served as a read-out for the system. The repressors used were sensitive to either selected chemicals or temperature. In one of the experiments, the system was turned ON by adding the chemical IPTG (a modified sugar), and nearly all the cells became green fluorescent within five to six hours. Upon raising the temperature to activate the temperature-sensitive repressor, the cells began losing their green fluorescence within an hour and returned to the OFF state. Many labs had previously used chemical or temperature switches to turn on gene expression, but this paper was the first to assemble multiple genes together and construct a switch which allowed toggling cells back and forth between stable ON and OFF states.
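The mutual-repression logic behind such a toggle switch can be illustrated with a toy two-variable model. This is only a dimensionless caricature under assumed parameters (Hill-type repression, unit degradation rates, simple Euler integration), not the equations or values from the Gardner paper:

```python
def simulate_toggle(u0, v0, alpha=10.0, beta=2.0, steps=20000, dt=0.01):
    """Euler-integrate a dimensionless mutual-repression model:
    each repressor's synthesis is inhibited by the other one,
    and both proteins decay at a unit rate."""
    u, v = u0, v0
    for _ in range(steps):
        du = alpha / (1.0 + v**beta) - u   # repressor 1, inhibited by repressor 2
        dv = alpha / (1.0 + u**beta) - v   # repressor 2, inhibited by repressor 1
        u, v = u + du * dt, v + dv * dt
    return u, v

# Which repressor starts out high decides which stable state the switch
# settles into -- and it stays there after the transient input is gone.
u_on, v_on = simulate_toggle(5.0, 0.1)    # repressor 1 high -> ON state
u_off, v_off = simulate_toggle(0.1, 5.0)  # repressor 2 high -> OFF state
```

Each run converges to a different stable state, which is the bistability that lets the real cells “remember” a transient IPTG pulse or temperature shift long after the input is removed.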

The same issue of Nature contained a second landmark paper which also described the engineering of gene circuits. The researchers Michael Elowitz and Stanislas Leibler described the generation of an engineered gene oscillator in their article “A synthetic oscillatory network of transcriptional regulators”. By introducing three repressor genes which constituted a negative feedback loop, together with a green fluorescent protein as a marker of the oscillation, the researchers created a molecular clock in bacteria with an oscillation period of roughly 150 minutes. The genes and the proteins they encoded were not part of any natural biological clock, and none of them would have oscillated if they had been introduced into the bacteria on their own. The beauty of the design lay in the combination of three serially repressing genes, and the periodicity of this engineered clock reflected the half-life of the protein encoded by each gene as well as the time it took for each protein to act on the subsequent member of the gene loop.
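The ring-of-repressors idea can likewise be sketched in a few lines. This is a protein-only caricature with assumed symmetric parameters (the published model tracks mRNA and protein separately and uses different constants), but it still shows why an odd-numbered ring of strong repressors oscillates:

```python
def repressilator(alpha=50.0, n=4, dt=0.01, t_end=200.0):
    """Three proteins in a ring, each repressing the synthesis of the next:
    dp_i/dt = alpha / (1 + p_{i-1}**n) - p_i  (Euler integration)."""
    p = [1.0, 2.0, 3.0]              # asymmetric start to break the symmetry
    trace = []
    t = 0.0
    while t < t_end:
        dp = [alpha / (1.0 + p[i - 1] ** n) - p[i] for i in range(3)]
        p = [p[i] + dp[i] * dt for i in range(3)]
        if t > t_end / 2:            # record only after transients die down
            trace.append(p[0])
        t += dt
    return trace

trace = repressilator()
amplitude = max(trace) - min(trace)  # sustained swings: the clock keeps ticking
```

Because each repression stage inverts and delays the signal, an odd ring with strong enough feedback has no consistent steady state and keeps cycling; in the real circuit, the roughly 150-minute period similarly emerges from the protein half-lives and repression delays.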

Both papers described the introduction of plasmids encoding multiple genes into bacteria, but this in itself was not novel; it had been routine practice in molecular biology laboratories since the 1970s. The panache of the work lay in the construction of functional biological modules consisting of multiple genes which interacted with each other in a controlled and predictable manner. Since the publication of these two articles, hundreds of scientific papers have been published describing even more intricate engineered gene circuits. These newer studies take advantage of the large number of molecular tools that have become available to query the genome, as well as newer DNA plasmids encoding novel biosensors and regulators.

Synthetic biology is an area of science devoted to engineering novel biological circuits, devices, systems, genomes or even whole organisms. This rather broad description of what “synthetic biology” encompasses reflects the multidisciplinary nature of this field, which integrates ideas derived from biology, engineering, chemistry and mathematical modeling as well as a vast arsenal of experimental tools developed in each of these disciplines. Specific examples of “synthetic biology” include engineering microbial organisms that are able to mass-produce fuels or other valuable raw materials, synthesizing large chunks of DNA to replace whole chromosomes or even the complete genome in certain cells, assembling synthetic cells, or introducing groups of genes into cells so that these genes can form functional circuits by interacting with each other. Synthesis in the context of synthetic biology can signify the engineering of artificial genes or biological systems that do not exist in nature (i.e. synthetic = artificial or unnatural), but synthesis can also stand for integration and composition, a meaning which is closer to the Greek origin of the word. It is this latter aspect of synthetic biology which makes it an attractive area for basic scientists who are trying to understand the complexity of biological organisms. Instead of the traditional molecular biology focus on studying a single gene and its function, synthetic biology engineers biological composites that consist of multiple genes and their regulatory elements. This enables scientists to interrogate how these genes, their regulatory elements and the proteins they encode interact with each other. Synthesis serves as a path to analysis.

One goal of synthetic biologists is to create complex circuits in cells to facilitate biocomputing: building biological computers that are as powerful as or even more powerful than traditional computers. While engineered gene circuits and cells have some degree of memory and computing power, they are no match for the comparatively gigantic computing power of even small digital computers. Nevertheless, we have to keep in mind that the field is very young and advancing at a rapid pace.

One of the major recent advances in synthetic biology occurred in 2013 when an MIT research team led by Rahul Sarpeshkar and Timothy Lu created analog computing circuits in cells. Most synthetic biology groups that engineer gene circuits to create biological computers have taken their cues from contemporary computer technology. Nearly all of the computers we use are digital computers, which process data using discrete values such as 0’s and 1’s. Analog data processing, on the other hand, uses a continuous range of values instead of 0’s and 1’s. Digital computers have supplanted analog computers in nearly all areas of life because they are easy to program, highly efficient and can process analog signals by converting them into digital data. Nature, on the other hand, processes data and information using both analog and digital approaches. Some biological states are indeed discrete, such as heart cells which are electrically depolarized and then repolarized at periodic intervals in order to keep the heart beating. Such discrete states of cells (polarized / depolarized) can be modeled using the ON and OFF states in the biological circuits described earlier. However, many biological processes, such as inflammation, occur on a continuous scale. Cells do not just exist in uninflamed and inflamed states; instead there is a continuum of inflammation, from minimal inflammatory activation to massive inflammation. Environmental signals that are critical for cell behavior, such as temperature, tension or shear stress, occur on a continuous scale, and there is little evidence to indicate that cells convert these analog signals into digital data.

Most of the attempts to create synthetic gene circuits and study information processing in cells have been based on a digital computing paradigm. Sarpeshkar and Lu instead wondered whether one could construct analog computation circuits and take advantage of the analog information processing systems that may be intrinsic to cells. The researchers created an analog synthetic gene circuit using only three proteins that regulate gene expression and the fluorescent protein mCherry as a read-out. This synthetic circuit was able to perform additions or ratiometric calculations in which the cumulative fluorescence of the mCherry was either the sum or the ratio of selected chemical input concentrations. Constructing a digital circuit with similar computational power would have required a much larger number of components.
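The flavor of such an analog adder can be conveyed with a cartoon model. The assumption here is that two inducible promoters drive the same fluorescent reporter, so total fluorescence is the sum of two dose responses; the Hill parameters and the simple ratio function below are hypothetical illustrations, not the log-domain circuits of the actual paper:

```python
def promoter_activity(inducer, k=1.0, vmax=100.0):
    """Toy Hill-type dose response of an inducible promoter
    (hypothetical parameters, chosen only for illustration)."""
    return vmax * inducer / (k + inducer)

def analog_adder(x1, x2):
    """Two promoters drive the SAME reporter, so one continuous readout
    (total fluorescence) carries the sum of the two input responses."""
    return promoter_activity(x1) + promoter_activity(x2)

def analog_ratio(x1, x2):
    """Crude ratiometric readout comparing the two responses directly;
    the published circuit computes ratios far more cleverly in the log domain."""
    return promoter_activity(x1) / promoter_activity(x2)
```

The point of the comparison in the text is economy: this “adder” needs no cascade of logic gates because the chemistry itself performs the summation on a continuous signal, whereas a digital implementation would need many more parts just to represent and add multi-bit numbers.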

The design of analog gene circuits represents a major turning point in synthetic biology and will likely spark a wave of new research which combines analog and digital computing when trying to engineer biological computers. In our day-to-day lives, analog computers have become more-or-less obsolete. However, the recent call for unconventional computing research by the US Defense Advanced Research Projects Agency (DARPA) is seen by some as one indicator of a possible paradigm shift towards re-examining the value of analog computing. If other synthetic biology groups can replicate the work of Sarpeshkar and Lu and construct even more powerful analog or analog-digital hybrid circuits, then the renaissance of analog computing could be driven by biology. It is difficult to make any predictions regarding the construction of biological computing machines which rival or surpass the computing power of contemporary digital computers. What we can say is that synthetic biology is becoming one of the most exciting areas of research that will provide amazing insights into the complexity of biological systems and may provide a path to revolutionize biotechnology.

The cells in the body of a healthy person all have the same DNA, right? Not really! It has been known for quite some time now that there are genetic differences between cells within one person. The expression used to describe these between-cell differences is “somatic mosaicism”, because cells can represent a mosaic of genetic profiles, even within a single organ. During embryonic development, all cells are derived from one fertilized egg and ought to be genetically identical. However, during every cell division, errors in DNA replication can occur, and these can lead to genetic differences between cells. This process not only occurs during embryonic development but continues after birth.

As we age, our cells are exposed to numerous factors such as radiation, chemicals or other stressors which can cause genetic alterations, ranging from single nucleotide mutations to duplications and deletions of large chunks of DNA. Some mutations are known to cause cancer by making a single cell grow rapidly, but not all mutations lead to cancer. Many spontaneous mutations either result in the death of a cell or do not impact its function in any significant manner. DNA copy number variation (CNV) is an expression used to describe a variable copy number of larger DNA segments, one kilobase (kb) or more in size. Most recent studies on CNVs have compared CNVs between people, i.e. how many CNVs person A has when compared to person B. It turns out that there may be quite a bit of genetic diversity between people that had previously been overlooked.

A new paper published in the journal Nature takes this one step further. It shows that there are significant CNVs not only between people, but even within a single person. In the study “Somatic copy number mosaicism in human skin revealed by induced pluripotent stem cells“, Alexej Abyzov and colleagues found significant CNVs in induced pluripotent stem cells (iPSCs) that they had generated from the adult skin cells of human subjects. Importantly, most of these CNVs were not the result of reprogramming adult skin cells to the stem cell state. They were already present in the skin fibroblasts obtained from the human subjects. Most analyses of CNVs are performed on whole tissues or biopsies, not on single cells, which is why so little is known about between-cell CNV differences. However, when iPSCs are generated from skin fibroblasts, they are often derived from a single cell. This enables the evaluation of genetic diversity between cells.

Abyzov and colleagues estimate that 30% of adult skin fibroblasts carry large CNVs. This estimate is based on a very small number of fibroblast samples, and it is not clear whether other cells such as neurons or heart cells harbor similar CNVs or whether the 30% estimate would hold up in a larger sample. Their work leads to an intriguing question: What percentage of neighboring cells in a single heart, brain or kidney are actually genetically identical? Cell types such as heart cells or adult neurons cannot be clonally expanded, so it may be difficult to determine the genetic diversity within a heart or a brain using the methods employed by Abyzov and colleagues.

What are the implications of this work? On a practical level, this study suggests that it may be important to derive multiple iPSC clones from a subject’s or patient’s skin cells if one wants to use the iPSCs for disease modeling. This will help control for the genetic diversity that exists among the skin cells. However, a much more profound implication of this work is that we have to think about between-cell diversity within a single organ. We need to develop better tools to analyze genetic diversity between individual cells, and more importantly, we have to understand how this genetic diversity impacts health and disease.

The MIT-based researcher Rick Young is one of the world’s top molecular biologists. His laboratory at the Whitehead Institute for Biomedical Research has helped define many of the key principles of how gene expression is regulated, especially in stem cells and cancer cells. At a symposium organized by the International Society for Stem Cell Research (ISSCR), Rick presented some very provocative data today, which is bound to result in controversial discussions about how researchers should assess gene expression.

Ptolemy’s world map from the Harmonia Macrocosmica

It has become very common for molecular biology laboratories to use global gene expression analyses to understand the molecular signature of a cell. These global analyses can measure the gene expression of thousands of genes in a single experiment. By comparing the gene expression profiles of different groups of cells, such as cancer cells and their healthy counterparts, many important new genes or new roles for known genes have been uncovered. The Gene Expression Omnibus is a public repository for the huge amount of molecular information that is generated. So far, more than 800,000 samples have been analyzed, covering the gene expression in a vast array of organisms and disease states.

Rick himself has extensively used such expression analyses to characterize cancer cells and stem cells, but at the ISSCR symposium, he showed that most of these analyses are based on the erroneous assumption that the total RNA content in cells remains constant. When the gene expression in cancer cells is compared to that of healthy non-cancer cells, the analysis is routinely performed by normalizing or standardizing the RNA content. The same amount of RNA from cancer cells and non-cancer cells is obtained and the global analyses are able to detect relative differences in gene expression. However, a problem arises when one cell type is generating far more RNA than the cell type it is being compared to.

In a paper that was published today in the journal Cell entitled “Revisiting Global Gene Expression Analysis”, Rick Young and his colleagues discuss their recent discovery that the cancer-linked gene regulator c-Myc increases total gene expression two- to three-fold. Cells expressing the c-Myc gene therefore contain far more total RNA than cells that don’t express it. This means that most genes will be expressed at substantially higher levels in the c-Myc cells. However, if one were to perform a traditional gene expression analysis comparing c-Myc cells versus cells without c-Myc, one would “control” for these differences in RNA amount by using the same amount of RNA for both cell types. This traditional standardization makes a lot of sense; after all, how would one be able to compare the gene expression profiles of the two samples if different amounts of RNA were loaded? The problem with this common-sense standardization is that it misses global shifts of gene expression, such as those initiated by potent regulators like c-Myc. According to Rick Young, one answer to the problem is to include an additional control by “spiking” the samples with defined amounts of known RNA. This additional control would allow us to then analyze whether there is also an absolute change in gene expression, in addition to the relative changes that current gene expression analyses can detect.
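The normalization pitfall and the spike-in fix can be demonstrated with made-up numbers: a hypothetical three-gene experiment in which c-Myc amplifies every transcript 2.5-fold, with all counts invented purely for illustration.

```python
# Hypothetical molecule counts per cell: c-Myc cells make 2.5x more of every
# transcript, plus a fixed amount of spike-in RNA added per cell in both.
control = {"geneA": 100, "geneB": 200, "geneC": 300, "spike": 100}
myc     = {"geneA": 250, "geneB": 500, "geneC": 750, "spike": 100}

DEPTH = 70_000  # the same number of reads is sampled from each library

def sequence(molecules):
    """Reads are proportional to each gene's FRACTION of the library --
    this is the implicit 'equal total RNA' normalization."""
    total = sum(molecules.values())
    return {g: DEPTH * m / total for g, m in molecules.items()}

ctrl_reads, myc_reads = sequence(control), sequence(myc)

# A relative comparison (ignoring the spike-in) sees almost nothing:
print(myc_reads["geneA"] / ctrl_reads["geneA"])  # ~1.1: the global shift is hidden

# Rescaling by the spike-in (same absolute amount in both samples) recovers it:
ratio = (myc_reads["geneA"] / myc_reads["spike"]) / \
        (ctrl_reads["geneA"] / ctrl_reads["spike"])
print(ratio)  # 2.5: the true global amplification
```

The sketch makes the paper's point concrete: equal-RNA normalization compares fractions, which are nearly identical when every gene is scaled up together, while the spike-in provides the fixed external yardstick needed to see the absolute change.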

In some ways, this seems like a minor technical point, but I think that it actually points to a very central problem in how we perform gene expression analysis, as well as many other assays in cell biology and molecular biology. One is easily tempted to use exciting large scale analyses to study the genome, epigenome, proteome or phenome of cells. These high-tech analyses generate mountains of data and we spend an inordinate amount of time trying to make sense of the data. However, we sometimes forget to question the very basic assumptions that we have made. My mentor Till Roenneberg taught me how important it was to use the right controls in every experiment. The key word here is “right” controls, because merely including controls without thinking about their appropriateness is not sufficient. I think that Rick Young’s work is an important reminder for all of us to continuously re-evaluate the assumptions we make, because such a re-evaluation is a pre-requisite for good research practice.