
Sunday, August 29, 2010

So, I like to imagine that people are reading this blog, that they are intrigued by its content, and it's got them wondering, "Why is this blog called 'Lost in Transcription'?" As I noted in the pilot, the aim here is to talk about poetry and science, particularly those parts of science that are related to my own research, where I can claim some degree of expertise. The title is meant to bridge between these two worlds.

First, in the literary domain, it is a play on the phrase "lost in translation," which appears any time a discussion of literature, cultural differences, or Scarlett Johansson goes on for longer than ten minutes. Within poetry, its two most famous appearances are as the title of a James Merrill poem (perhaps the best poem ever written about working a jigsaw puzzle) and in a quotation attributed to Robert Frost: "Poetry is what gets lost in translation."

Within genetics, translation refers to the process by which an RNA sequence is read to generate an amino acid sequence. In a related process called transcription, DNA serves as a template for the synthesis of RNA. One of the topics that I work on is genomic imprinting, where one of the two gene copies in a cell has its transcription silenced. So, in a strictly literal sense, a better title would have been something like "loss of transcription," but if you are willing to stretch with me, we could imagine that at a genetic locus that is subject to genomic imprinting, the genetic information from one of your two parents is Lost in Transcription.

So, it is standard conventional wisdom that people are liberal when they're young, and conservative when they're old. To the extent that we interpret "liberal" as "eager for change" and "conservative" as "against change," this trajectory is only natural. Especially in the modern world, where things are changing all the time, it may simply come down to a difference in experience: you're less likely to pine for the way the world was thirty years ago if you weren't alive thirty years ago.

But what I am really interested in here is the apparent trend where people become more conservative with respect to economic policies. In this context, the argument about familiarity does not seem to hold. In the United States, the government's economic policies have been trending more conservative for decades, and the familiarity argument would predict that older people should be, on average, more liberal. However, there is a different aspect of familiarity that may be relevant, as it pertains to our beliefs about human nature.

A key aspect of the economic debate between liberals and conservatives is a difference in the assumptions they make about how people will behave when left to their own devices. If you will forgive me for painting complex things with a simple brush, the cartoon versions of these are something like this. Conservatives believe that people are inherently self-serving and lazy, and will work hard only if they are given tangible incentive to do so. From this perspective, progressive tax structures and government programs like welfare and social security are problematic because they take away the incentive to earn money. Liberals, by contrast, believe that everyone is trying hard, that inequality comes largely from societal structures that are beyond individuals' control, and that people should not be punished for the inherent unfairness of society (except maybe those at the top of the pay scale, who have benefitted most from those inequalities).

Now, the most obvious difference between young people and old people is that old people have a lot more experience with other people than young people do. That is, we tend to start out with positive views about human nature, but over time we interact with more and more people, they disappoint us, and we become progressively more cynical. Many conservatives see this as evidence in favor of their position: we start naive, and become conservative when we learn what people are actually like. However, I want to suggest a different explanation, having to do with asymmetries in how we perceive positive and negative deviations from our expectations. Intuitively, this comes down to the fact that we notice whenever we get stopped by a red light, but often don't notice when we hit a green light. Therefore, we perceive that stoplights are red more often than they actually are.

This also happens in the economic domain, as has been extensively documented by experimental economists in a variety of "public goods" games. The basic structure of these games is as follows. You have a group of people, say 10, and you give each of them some money, say $10. Each person can then contribute a fraction of their $10 to the "pot." The money in the pot is then multiplied by some factor, say 5, and distributed equally among the ten players. So, if no one contributes anything, everyone gets to keep the $10 they were given at the beginning. If everyone contributes the full $10, the pot has $100, which is multiplied by 5 to give $500, which is distributed back to the players, and everyone walks away with $50.

The ideal thing for the group as a whole is for everyone to contribute the maximal amount. However, the ideal thing for the individual is to contribute nothing (and to hope that everyone else contributes the maximum). For example, if I contribute nothing, and everyone else contributes $10, I walk away with $55, and everyone else walks away with $45. If I contribute $10, and no one else contributes anything, I'll get $5, and everyone else will get $15. From an "economic rationality," "Nash equilibrium" perspective, the thing to do is contribute nothing. However, this is not what happens in practice.
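The payoff structure above is simple enough to sketch in a few lines of Python. The function name and default parameters here are my own illustrative choices; the numbers reproduce the examples in the text.

```python
def payoffs(contributions, endowment=10, multiplier=5):
    """Take-home amounts in a one-shot public goods game.

    Each player keeps (endowment - contribution) and receives an
    equal share of the multiplied pot.
    """
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)
    return [endowment - c + share for c in contributions]

# One free rider among ten otherwise full contributors:
print(payoffs([0] + [10] * 9))   # free rider nets $55; everyone else $45

# One lone contributor among ten players:
print(payoffs([10] + [0] * 9))   # contributor nets $5; everyone else $15

# Everyone contributes half their endowment:
print(payoffs([5] * 10))         # everyone walks away with $30
```

Running the all-or-nothing cases makes the individual incentive to defect concrete: whatever the others do, you always come out $5 ahead per dollar you withhold, since only $0.50 of each contributed dollar comes back to you.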

In a wide variety of experimental setups, what people actually do is contribute about 50% of what they are initially given. So, the typical outcome in our experiment would be that everyone contributes about $5, which makes $50 in the pot, which is multiplied to $250, and everyone walks away with $30 (the $5 they kept plus $25 from the pot). Across a broad range of cultures, ages, quantities of money, etc., people come into these experiments with a somewhat liberal perspective, as they seem to both trust the good will of the other players, and care about the results for the group as a whole.

However, if we play the game over and over again, an interesting thing happens: the average contribution gradually declines, until eventually, no one is contributing anything to the pot. Based on interviews with the participants in these games, economists believe that they understand this trend. Let's say that one person contributes $5, which is the average among the group, but some people contribute $4, and some $6. This person will not really think about the people who gave $5 or $6, but will think a lot about the people who gave $4, get pissed off, and reduce their contribution in the next round. While it is mathematically trivial that people, on average, contribute the average amount to the pot, it seems to be psychologically true that people perceive themselves on average as having made an above-average contribution.
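This downward spiral can be caricatured in a toy simulation. The behavioral rule here is my own assumption, not something drawn from the actual experiments: each round, every player reacts to the lowest contribution they observed by moving their own next contribution partway toward slightly below it.

```python
import random

def simulate(rounds=10, players=10, spite=0.5, seed=1):
    """Toy model (my assumption) of decay in an iterated public goods game.

    Players fixate on the lowest contribution they just saw and
    retaliate by undercutting it, dragging the group average down.
    """
    rng = random.Random(seed)
    contribs = [rng.uniform(4, 6) for _ in range(players)]  # start near 50%
    averages = []
    for _ in range(rounds):
        averages.append(sum(contribs) / players)
        undercut = 0.9 * min(contribs)  # target slightly below the lowest observed
        contribs = [max(0.0, c + spite * (undercut - c)) for c in contribs]
    return averages

print(simulate())  # average contribution shrinks round after round
```

The point of the sketch is just that a purely one-sided reaction rule, with no change in anyone's underlying generosity, is enough to produce the observed collapse toward zero.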

What I think is that something analogous happens over the course of the lifetime of an individual. We meet some people who are hard working, and some who are lazy, but there is this perceptual bias that means that the lazy, selfish people we meet weigh more heavily in our developing opinions about "what people are like."

The other interesting finding from these experiments is that it is remarkably easy to reset the spiral of cynicism. If you take the participants out of the room, give them a cup of coffee, and let them use the bathroom, when they go back in, they often go right back to contributing 50% on average. So, note to Democratic lawmakers, if you can figure out how to let the country drink a collective cup of coffee and use the collective bathroom, you may find a dramatic increase in support for social programs and a progressive tax structure.

Sunday, August 8, 2010

So, one of the things that I study is genomic imprinting. What is that, exactly?

Even if you're not a biologist, you are probably familiar with the fact that, for most of your genes, you carry two copies, or alleles. You get one of those alleles from your mom and one from your dad. Those two alleles can be the same (have identical DNA sequences) or different (usually at only a small number of positions within the DNA sequence). If they are different, then the consequences of those alleles for your traits, like how tall you are or what color your eyes are, are determined by the dominance relationship between the two alleles. For example, the main allele responsible for red hair (at the MC1R locus) is recessive to the alleles for brown or black hair. So, if you have only one copy of the red-hair allele, you will probably have dark hair. Importantly, in terms of what follows, it does not matter whether the recessive red-hair allele came from your mother or your father.

If you are a biologist, you already knew all of that, but you may or may not be familiar with imprinted genes. About one percent (or possibly more) of our genes are imprinted. For these genes, it does matter which allele came from your mother and which one came from your father. That's because imprinted genes retain a chemical memory of which parent they came from, and function differently depending on their parental origin. More specifically, at an imprinted locus, alleles are subjected to epigenetic modifications in the germ lines (ovaries or testes). These epigenetic modifications can be chemical modifications applied directly to the DNA itself, or modifications to proteins that are closely associated with the DNA. These modifications alter how the allele functions, without modifying the DNA sequence itself. The key thing is that, for imprinted genes, the epigenetic modifications that are established in the male germ line are different from those established in the female germ line. So the allele that came from your father will function differently from the allele that came from your mother, even if the DNA sequences are identical.

In the simplest cases, one of the two alleles is inactivated, or turned off. The effect of that gene on a given trait then depends only on the active allele. To return to the red-hair example, imagine that the MC1R locus were imprinted (which it is not, as far as we know), and that only the paternally inherited copy was expressed. Now, if you had one copy of the red-hair allele and one copy of the more common dark-hair allele, you would not necessarily have dark hair. Your hair would be dark if your red-hair allele came from your mom, but if it came from your dad, your hair would be red.
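The parent-of-origin logic in this hypothetical can be captured in a tiny Python function. To be clear, this encodes the thought experiment above, not real biology: MC1R is not actually imprinted, and the two-allele "red"/"dark" model is a cartoon.

```python
def hair_color(maternal, paternal, imprinted=True):
    """Cartoon model of a hypothetically imprinted MC1R locus.

    If imprinted, only the paternal allele is expressed (maternal
    copy silenced). Otherwise both are expressed, and 'red' is
    recessive to 'dark' under ordinary dominance.
    """
    expressed = [paternal] if imprinted else [maternal, paternal]
    return 'dark' if 'dark' in expressed else 'red'

print(hair_color(maternal='red', paternal='dark'))   # dark: dad's allele expressed
print(hair_color(maternal='dark', paternal='red'))   # red: mom's allele silenced
print(hair_color('red', 'dark', imprinted=False))    # dark: ordinary dominance
```

The same red/dark heterozygote gives two different hair colors depending on which parent the red allele came from, which is exactly the signature that distinguishes an imprinted locus from ordinary dominance.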

Of course, as with all things in biology, once you start looking at the details, everything becomes a lot messier and more confusing. But, that is the basic gist.

Genomic imprinting was one of the biggest surprises to come out of molecular biology in the past few decades. Both the origins of imprinting of particular genes, and the effect of imprinting on the evolution of those genes, are interesting questions that we will return to in future posts. At some point along the way, we will get deeper into those messy and confusing details.
