Dr. Julie, a.k.a. Scientific Chick, brings you insights into what's happening in the world of life sciences. Straight from the scientific source, relevant information you should know about, in plain language.

In a nutshell: An imaging technique, functional magnetic resonance imaging (fMRI), allowed researchers to communicate with patients in a vegetative state. Researchers asked yes/no questions, and the patients answered by thinking of playing tennis for yes or thinking of a house for no. The resulting brain signals from these thoughts could be imaged, interpreted and used for communication.

The good: This technique offers great hope for friends and relatives of patients in minimally conscious states.

The bad: The study was heavily criticized: only one patient was tested, and the technique is far from perfect.

What’s next? We will no doubt hear more about this in the near future. The first step will be to replicate these findings with a greater number of patients.

In a nutshell: Researchers made synthetic DNA and incorporated it into an empty bacterial cell, thereby creating a fully functioning cell with a man-made genome.

The good: This study represents a technical feat and opens new doors in the fields of molecular and cellular biology.

The bad: Like transgenic organisms before it, the synthetic cell re-opens Pandora's box of ethical questions.

What’s next? Human-engineered cells will probably play a role in gene therapy and in the quest to build tissue in a dish.

In a nutshell: A study of computerized brain training with over 11,000 participants showed that people improved at the tasks they practiced, but this improvement didn’t extend to general cognition.

The good: This study urges caution when buying into the brain training craze.

The bad: The results may be misleading: just because the researchers didn’t see any improvement doesn’t mean there wasn’t any. The brain training could have been inadequate, or the researchers could have been measuring the wrong parameters.

What’s next? More controlled studies will be needed to determine the effectiveness of the games. Brain training is likely to become increasingly specialized: training for older adults, for children with autism, etc.

In a nutshell: Researchers were able to control brain cells using light, and rescued symptoms of Parkinson’s disease in mice.

The good: I said it before and I’ll say it again: optogenetics has the potential to revolutionize medicine.

The bad: The technique is complex and, so far, only feasible in small mammals.

What’s next? Researchers are already testing it in larger mammals and developing new ways to deliver light into the brain.

In a nutshell: Researchers found a relationship between a specific version of a gene and promiscuous sexual behaviors.

The good: The study provides new insights into the link between genes and human behavior.

The bad: It’s not that simple.

What’s next? You can expect more studies of the “this gene does that” type. However, researchers are increasingly interested in how the environment can impact the expression of genes, and the story is bound to get even more complicated.

Monday, December 20, 2010

Separation anxiety isn’t just for babies anymore: one in two dogs is thought to suffer from separation-related behaviors when their owners leave the house. These behaviors take many forms, from chewing your favorite pair of Jimmy Choo’s (Santa Baby… I want Jimmy Choo’s) to yapping uncontrollably until the neighbors call the police. Seeing as this may represent a serious problem in animal welfare, a team of researchers assessed the relationship between separation-related behaviors and the overall moods of dogs.

The researchers used 24 shelter dogs and started by measuring whether each dog suffered from separation anxiety. To do this, a researcher played with a dog for 20 minutes in a designated room. The next day, the dog was taken to the same room, played with for a few minutes, then left alone for five minutes. The dog’s behavior during those five minutes was analyzed and graded as a “separation anxiety score”.

A few days after the test, the researchers conducted another experiment to assess whether each dog had a pessimistic or optimistic outlook. To achieve this, they trained the dogs to learn that when in a given room, a food bowl placed at the very left of the room always had a treat in it, and a food bowl placed at the very right of the room never had a treat in it. Once the dogs learned this, the researchers placed a food bowl right in the middle of the room. Presumably, dogs that ran fast to check this new bowl were anticipating that it had food in it and were considered to be “optimistic”, whereas dogs that either slowly made their way over or didn’t bother to check it out were considered to be “pessimistic”. It’s kind of a dog version of the glass half-full or half-empty paradigm.

The relevant finding of this article is that the researchers found that dogs who experienced separation anxiety were more likely to be of the “pessimistic” kind. Pessimism is thought to be related to negative moods, and knowing this may help in figuring out how to avoid chewed-on Jimmy Choo’s.

While I thought the study was quirky and interesting, I found it a bit of a stretch to label these dogs as optimistic or pessimistic using such a simple experiment. The researchers themselves owned up to this by saying that “the conscious experience of such a state [optimistic/pessimistic] cannot be known for sure”. When I read the article, I thought maybe the dogs who went for the food bowl in the middle were just more curious than others, and I’m not sure how curiosity relates to optimism (for example, I consider myself to be quite curious, but not necessarily optimistic: I browse the Jimmy Choo website to see what the new styles are, but I don’t envision ever owning a pair). As well, I thought the measure for separation anxiety was a bit weak. While it’s a known experiment, it’s not immediately obvious to me that the behavior of these pound dogs relates to the behaviors you would observe in dogs with a stable home.

Still, it’s a good reminder to keep our pets as happy as possible, especially during the holidays when routines are broken and moods are uneven.

Saturday, December 11, 2010

Christmas party season is officially upon us. The next few weeks are pretty much going to be a long string of turkeys, stuffing, various things made with cranberries and cutesy Christmas cookies à la Martha Stewart. Faced with this, many of us who are concerned with staying trim and not slipping into food comas on a daily basis may be feeling a little apprehensive. Well, fear not! New research published in the journal Science suggests you can eat less by simply… thinking more about food!

The study looks at the relationship between the concepts of mental imagery (imagining doing things) and habituation (getting used to things). Through mental imagery, what you imagine can affect your body and your emotions just as much as the real thing: just thinking about a spider crawling on your neck can lead to the same feeling of tingling and fear as if it were actually happening. The second concept, habituation, refers to the decrease in your body's and your mind's response to a stimulus. For example, your tenth bite of stuffing is not nearly as satisfying as your first. Given these two principles, the researchers asked if you could habituate to a food just by imagining eating it.
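To make the habituation idea concrete, here's a toy model of my own (the numbers are invented for illustration, not taken from the study): assume each repeated bite is worth a fixed fraction of the previous one.

```python
# Toy habituation model: each consecutive bite is assumed to be worth a
# constant fraction of the previous one. The decay rate is a made-up
# illustration, not a value from the paper.
DECAY = 0.8          # hypothetical: each bite retains 80% of the last bite's appeal
FIRST_BITE = 1.0     # satisfaction of bite 1, in arbitrary units

def satisfaction(n, r0=FIRST_BITE, decay=DECAY):
    """Satisfaction of the n-th consecutive bite under geometric habituation."""
    return r0 * decay ** (n - 1)

print(f"bite 1:  {satisfaction(1):.2f}")
print(f"bite 10: {satisfaction(10):.2f}")  # about 0.13: far less satisfying
```

Under this simplification, the tenth bite of stuffing delivers roughly an eighth of the satisfaction of the first, which is the pattern the researchers hoped imagination alone could reproduce.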

The participants in the study were divided into two groups and each group was asked to imagine doing a task. The first group was asked to picture eating 30 M&M’s, one at a time. The second group was asked to picture putting 30 quarters into a laundry machine, one at a time. After this mental imagery task, the participants were each put in front of a bowl of M&M’s and told to eat as many as they wanted as a preparation for a “taste test” later on (obviously, the taste test is fake, it’s just an excuse to get the participants to eat). The researchers then weighed the leftover M&M’s and measured how much each participant had eaten. They found that those who pictured eating M&M’s ate significantly fewer candies than those who pictured feeding a laundry machine!

The researchers then compared participants who imagined eating only three M&M’s with participants who imagined eating 30. They found that participants who imagined eating more M&M’s ended up actually eating fewer of the real ones. This means that habituation (doing something repeatedly) is key to observing an effect of mental imagery.

So whether your drug of choice is M&M’s, stuffing or cheese balls, you may be able to minimize the holiday damage by doing a little mental exercise. Now if only I could just picture purchasing and wrapping a bunch of presents…

Reference: Thought for food: imagined consumption reduces actual consumption. (2010) Morewedge CL et al. Science 330:1530-1533.

Monday, December 6, 2010

The story behind this headline comes from a study of human sexual behaviour. Different people have different sex drives and different sexual behaviours, and we don’t know why, so American researchers set out to solve the mystery.

The researchers enlisted 181 male and female young adults and asked each one for a detailed history of sexual behaviour and relationships (awkward!) and a sample of spit. The spit was used to analyze each participant’s DNA and to look for a specific version of a gene called DRD4 (subsequently dubbed “the slut gene” by the media). The results of this study show that participants who have this specific version of the gene are more promiscuous (the researchers actually used the words “one-night stand”) and report more instances of sexual infidelity. Well, there you have it. Free will is overridden by our genes.

How does this work? Your brain’s reward system is called the dopamine system (DRD4 stands for Dopamine Receptor D4), and among other things, it takes care of your motivation for sensation-seeking behaviours like having sex. This happens through the flow of dopamine molecules, which act as message transmitters in your brain. For a brain cell to receive a message conveyed through dopamine, it needs a dopamine receptor like the D4. The gene that encodes this receptor (DRD4) comes in two forms: one that binds dopamine tightly and one that binds it less tightly. If you have the version of the gene that encodes the less-tightly-binding receptor, you need more dopamine to achieve the same end result in your brain (the feeling of reward), hence the string of one-night stands.

So are cheating and one-night stands excused because our genes made us do it? At the risk of becoming unpopular, I have to say the answer to that is no. The relationship between the special version of the DRD4 gene and promiscuity is not deterministic: having the gene doesn’t automatically lead to one-night stands. Many people in the study had the gene and didn’t cheat. The gene-sexual behaviour relationship is what we call probabilistic: having the gene only increases the probability that you would exhibit a given behaviour. What’s more, our environment can change how different genes are expressed, and it is possible to modify our behaviour. No excuses!

There are also two caveats to note in this study. The first is that not all the results were statistically significant. For example, 50% of people with the special version of the gene reported being unfaithful, compared with 22% of the participants with the normal version. While this may seem like a big difference, it was not statistically significant because the sample wasn't big enough to rule out the possibility that the difference arose just by chance. The second problem is that the relationship between the gene and sexual behaviour could be due to a confounder: a variable that wasn't studied but could explain the results. For example, if having the special gene makes you more honest about your sexual history, then these results would be due to a truth-telling tendency, not a sleeping-around tendency.
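To see why a 50%-versus-22% gap can still fail to reach significance, here is a sketch of a two-proportion z-test. The 50% and 22% figures come from the post; the group sizes are hypothetical, since the per-group counts aren't given here.

```python
import math

# Hypothetical group sizes (the actual per-group counts aren't reported in the post):
# 10 carriers of the DRD4 variant, 5 of whom reported infidelity (50%),
# 18 non-carriers, 4 of whom reported infidelity (~22%).
x1, n1 = 5, 10
x2, n2 = 4, 18

p1, p2 = x1 / n1, x2 / n2
pooled = (x1 + x2) / (n1 + n2)                              # pooled proportion under H0
se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))   # standard error of the difference
z = (p1 - p2) / se

# |z| must exceed about 1.96 for significance at the 5% level
print(f"z = {z:.2f}, significant at 5%: {abs(z) > 1.96}")
```

With these toy numbers, z comes out around 1.5, short of the 1.96 cutoff: a seemingly large gap in percentages can easily be a chance fluctuation when each group holds only a handful of people.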

My favorite part of the entire research article is at the very end when the researchers write:

“…we emphasize that it would be prudent to avoid premature and facile characterizations of the DRD4 VNTR polymorphism as “the promiscuity gene” or “the cheating gene.”

Sunday, November 28, 2010

I consider myself a fairly healthy person and I rarely get sick. However, there is one activity that never fails to put me under the weather: flying. No matter how hard I try, no matter how much I wash my hands and try not to touch my face, any flight inevitably leads to some kind of illness. It usually ends up being a common cold, but I remember a nasty Christmas holiday spent in bed with a stomach flu. In any case, my recent flight home from San Diego was no exception, and here I am, still battling a stupid cold. So naturally, I looked for an article on how to prevent colds.

In a recent study, researchers followed over a thousand adults (18-85 years old) for 12 weeks during the fall and winter seasons. Over this time, the participants had to report two measures: any symptoms of upper respiratory tract infection (such as a cold), and how much they exercised.

While running in the cold winter air might sound like a counterproductive measure to prevent colds, the researchers found that participants who reported being physically active (aerobic exercise) five days a week or more experienced significantly fewer cold and flu symptoms (a 43% reduction in the number of days with an illness). This relationship held true even when several factors were controlled for, such as dietary habits (eating lots of fruits and veggies) and stress levels.

Why might exercise prevent colds? While we don't have a clear-cut answer to this question, animal studies suggest a few leads. When you exercise, you increase the circulation of cells that are important for immunity and that are involved in fighting off the bad guys. More specifically, exercise has been shown to boost macrophages (cells that eat up invaders) in your lungs. In addition, exercise can lower the levels of immunity-compromising stress hormones.

Is there anything exercise can't do? Now I need researchers to study how one can be motivated to exercise when they are sitting in a comfy chair by the fire with a mug of chai tea and a pile of work to do and it's below zero outside. Tell me something I don't know, right?

Thursday, November 18, 2010

I didn't have a chance to write a decent post this week because I was away in San Diego for the most glorious scientific event I know: the Annual Meeting of the Society for Neuroscience (SfN). I will resume regular programming shortly, but in the meantime, here are some highlights from the conference, in no particular order.

- The conference attracts over 34,000 neuroscientists and people wanting to sell stuff to neuroscientists. There were about 16,000 poster presentations. The event lasts 5 days, and at any given time there can be a dozen talks going on. It's sometimes very hard to choose what to see. The Geek Meter registers very high.

- The opening session was a presentation by Glenn Close. She talked about her advocacy group for mental illness, Bring Change 2 Mind. She did a great job and I was moved. On a side note, she does NOT look 63.

- One of the highlights of SfN for many graduate students is the incredible amount of swag one can collect simply by feigning interest in a variety of products. The floor space for vendors is the size of a small city. This year I didn't have much time to go through it all, but I still managed to come home with two T-shirts, a mini laptop mouse, a notebook with a depiction of the Wnt pathway on the cover, and several pens.

- I would say that the two major themes this year were sensory systems (vision, olfaction, etc.) and Alzheimer's disease, even though there was definitely something there for everyone. There was also a big focus on optogenetics. My personal opinion is that optogenetics will revolutionize the field of medicine.

- Celebrity scientists are called scilebrities and can be spotted everywhere. The conference also organizes a number of socials for every field of neuroscience where you can narrow down your schmoozing to the scilebrities that work in your area of interest. You can usually judge how well a field is doing by the quality of the catering. It's a bit of a running gag.

- San Diego can pretty much be summarized in three words: fish tacos and tequila.

- As surprising as it may sound, neuroscientists know how to party. You're just going to have to trust me on this one.

I already can't wait for next year. See you soon for a post on how to keep colds at bay.

Sunday, November 7, 2010

I have fond memories of high school math classes. Numbers came easy for me, and I derived a lot of satisfaction from solving problems (and even more so when I solved them fast!). I was lucky to have excellent teachers, especially in grade 11 and in CEGEP, who turned math into something of a game, a code I needed to crack. However, I’m well aware that math class was not a party for everyone. About 15 to 20% of the population struggle with some form of difficulty in learning or understanding mathematics. Obviously, this can be an obstacle to success in school, and in employment. To address this issue, a team of researchers from the UK set out to test whether brain stimulation could improve someone’s math abilities.

The researchers used a technique called transcranial direct current stimulation (TDCS, similar to the method used in this post on morality). TDCS consists of applying a weak current to a brain region (in this case, the parietal lobe, a region important for learning and understanding math) over a given time period (in this case, 20 minutes). The technique is non-invasive, meaning they don’t open up your scalp to get at your brain: electrodes are simply placed on your head (volunteers are much easier to recruit when the electrodes are on the outside, not the inside). Depending on the type of current that the researchers apply, TDCS can increase or reduce the excitation of the brain cells in the targeted region.

To test the impact of this kind of brain stimulation on math capabilities, the researchers delivered the stimulation while the participants (15 healthy adults) were learning the relative values between nine arbitrary symbols (for example, square is bigger than triangle). The learning session lasted 90 to 120 minutes. The participants received either the brain stimulation during the first 20 minutes of the session (the experimental group), or only during the first 30 seconds of the session (the control group, as 30 seconds of the stimulation is not long enough to see any effects, but still gives you the “tingles” associated with the protocol). After this learning phase, the researchers assessed the participants’ newly created sense of numerical value for the symbols with two different math tasks using the symbols. This whole process was then repeated over six days.

The results show that brain stimulation leads to better and more consistent performance on both math tasks. Mathematical ineptitude is cured! To make matters even more interesting, the researchers called the participants back six months later and re-tested them (no brain stimulation this time). And six months later, the brain stimulation group still performed better at the math tasks involving the fake digits.

While this may sound great, don’t start tasing your brain just yet. It’s worthwhile to note that on the last day of the initial six-day study, the researchers had the participants perform the same two math tasks, but with normal numbers. In this case, there was no difference between the experimental group and the control group. This means that the brain stimulation paradigm only worked for the specific task that was learned during the stimulation, and didn’t extend to math in general. Nonetheless, the researchers suggest that brain stimulation may be a tool for intervention for those who have “developmental and acquired disorders in numerical cognition” (read: for people who are bad at math).

Must we all be good at math? There’s a French saying that goes: "ça prend toute sorte de monde pour faire un monde" (roughly translates into: "it takes all sorts of people to make a world"). What’s your take on this? Do you think this is a great advance? Do you have any concerns? Let’s hear it!

Reference: Modulating neuronal activity produces specific and long-lasting changes in numerical competence. (2010) Cohen Kadosh R et al. Current Biology 20:1-5.

Sunday, October 31, 2010

When I picked my paper for this week’s blog, a very recent article published in PNAS, I didn’t factor in that I would write it on Halloween. Now I realize it’s going to seem like some cruel joke: on the one night when people stay up late, walk the streets with flashlights and eat candy, I’m writing about the link between light at night and obesity. Wow. If I had tried to pick something more fitting, I couldn’t have done a better job.

We all know obesity is on the rise and there are several reasons to explain the epidemic: increased intake of calories (Double Down sandwich, anyone?), dietary choices (cheezy poofs instead of apples, anyone?) and lack of exercise (reading blogs, anyone?). However, the rise of obesity rates also coincides with an increase in light at night – the artificial lighting that allows us to write blogs late at night and catch up on all other activities we didn’t have time to do during the daytime. The problem is that light is closely tied to our circadian rhythm (the built-in clock that controls our biological processes and our behaviour). When our circadian rhythm is disrupted (an extreme example of this is shift workers), so is our metabolism. Based on this logic, an international team of researchers set out to test whether light at night plays a role in the weight of mice.

The researchers divided their mice into three groups. The control group was housed in the standard light/dark cycle. Another group of mice was housed in a light/dim light cycle (let’s call them the “dim” group). Finally, a third group of mice was housed in a continuously lit room (let’s call them the “bright” group). The mice were housed in these conditions for eight weeks, and during this time, the researchers monitored a number of parameters including body mass, food intake, activity levels, and glucose tolerance (how quickly sugar is cleared from the blood).

The researchers found that all the mice that experienced light at night (both the dim group and the bright group) got significantly fatter than the control mice. What’s more, the dim group and the bright group also exhibited impaired glucose tolerance (this can mean the mice are in a prediabetic-like state). Did the light at night groups of mice eat more (who doesn’t get the munchies when watching a late-night movie)? No. Did they exercise less (who goes for a run at midnight)? Also no. So what happened?

As it turns out, while the two light at night groups of mice ate just as much as the control mice, they ate at different times. Mice are nocturnal animals, and so normally they do most of their eating at night. The mice in the dim group, however, ended up eating over half of their food during the “light” phase. When the mice in the dim group were forced to eat their normal food intake only during the normal (dark) time, they didn’t gain weight. How crazy is that? These results suggest that light at night disrupts the timing of food intake, and this throws the metabolism out of whack.

Anyone who has looked up weight loss tips knows that it’s a good habit to forgo eating past a certain time of night (usually 7 or 8pm) if you want to lose weight. The reason usually given to explain this is that night time food is most often unhealthy and calorie-laden snacks: munchies during a movie, or ice cream after a distressing phone call from the ex-boyfriend. This study suggests there might be something more to this weight loss strategy: it may be all about listening to our biological clock.

Sunday, October 24, 2010

Last week, I wrote about how walking can protect your brain against cognitive decline. Sounds easy, right? The truth is, there are several factors that impact brain health, and exercise is just one of them. This week, I'm going way back in time to 2007 to look at a different way to keep your brain healthy: apprendre une autre langue (French for "learning another language").

Canadian researchers were interested in the relationship between bilingualism and Alzheimer's disease. Their hypothesis was based on the concept of cognitive reserve, a term that represents the attributes of your brain that make it resistant to damage. For example, you might have heard that keeping your mind challenged by doing crossword puzzles can delay cognitive decline. This is one way to increase your cognitive reserve. Presumably, complex mental activity, like doing crossword puzzles or speaking more than one language, can lead to a lower chance of developing dementia and, if you do get dementia, a slower rate of decline.

The researchers sifted through the records of over 200 patients from a memory clinic. About half of their sample spoke one language and half spoke two languages. The researchers assessed the relationship between the number of languages each patient spoke and whether each patient had Alzheimer's disease. For the patients who did have Alzheimer's disease, the researchers noted how old the person was when the disease started. Of course, the researchers controlled their results for all the obvious potential biases, such as cultural differences, immigration, formal education and employment status.

The results of this analysis show that the bilingual patients developed Alzheimer's disease much later (4.1 years on average) than the monolingual patients. Since developing an age-associated disease like Alzheimer's later means you have a greater chance of dying before you get the disease, delaying the onset by 4 years means a reduction in the total cases of Alzheimer's disease. Currently, no drugs have an effect that's comparable to bilingualism.
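The logic that a later onset means fewer lifetime cases can be checked with a back-of-the-envelope simulation. Everything below is invented for illustration (the age distributions and the cohort are hypothetical; only the 4.1-year delay comes from the study): anyone whose onset is pushed past their age at death simply never becomes a case.

```python
import random

random.seed(42)
N = 100_000
DELAY = 4.1  # years: the average onset delay reported for bilingual patients

baseline_cases = delayed_cases = 0
for _ in range(N):
    death = random.gauss(82, 8)  # hypothetical age at death
    onset = random.gauss(78, 6)  # hypothetical age at Alzheimer's onset
    if onset < death:            # disease appears before death: a lifetime case
        baseline_cases += 1
    if onset + DELAY < death:    # same person, with onset pushed back 4.1 years
        delayed_cases += 1

reduction = (baseline_cases - delayed_cases) / baseline_cases
print(f"lifetime cases: {baseline_cases} -> {delayed_cases} ({reduction:.0%} fewer)")
```

Whatever distributions you plug in, the delayed count can never exceed the baseline count, because delaying onset only ever moves people out of the "case" column, never into it.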

Since no study is ever perfect, let me point out two small caveats before you fish out your old high school Spanish books. First, there is one thing the researchers could not control for, and that is whether cultural differences could lead to delays in seeking medical help for a condition. This could muck up the results because if some patients delayed their first medical visit, then the age at which they received the diagnosis for Alzheimer's disease could be skewed. Second, the protective effect of speaking two languages cannot be generalized to people who have some knowledge of another language but are not fully bilingual. In this study, the patients who were bilingual were true bilinguals, fluent in both languages and having used both languages regularly for most of their lives.

Still, it's a good reminder that a busy mind is a healthy mind. And it's nice to have evidence to justify the occasional weekend in Paris.

Reference: Bilingualism as a protection against the onset of symptoms of dementia. (2007) Bialystok E et al. Neuropsychologia 45:459-464.

Sunday, October 17, 2010

It's no secret that exercising is key to maintaining a healthy brain as we get older. We hear it all the time. So why isn't everybody exercising? After all, it represents a form of personal health insurance, and it's way cheaper than Sun Life. The truth is, even though many people are aware that exercising is good for them, they are not compelled to change their lifestyle because it's not exactly clear what kind of exercise is best, how long you need to do it, and what exactly it does to help your brain. Well, I'm going to tell you.

In a recent study published in the journal Neurology, American researchers looked at 299 adults with a mean age of 78 years old. They evaluated how active each person was by measuring how many blocks they walked over the period of one week (this ranged between zero and 300!). The researchers then waited nine years (!), and then took brain images for all the participants and evaluated their level of cognitive impairment.

Not surprisingly, the more someone walked, the greater their brain volume after nine years. Greater amounts of physical activity predicted bigger volumes for several brain regions associated with thinking and memory, such as the hippocampus and the frontal cortex. What's more, the bigger brains associated with physical activity cut the risk for cognitive impairment in half.

The magic number in this study is 72. Walking a minimum of 72 blocks per week was necessary to see the bigger brain effect. Walking more than 72 blocks didn't lead to an even bigger brain. While I wouldn't necessarily shoot for walking only and exactly 72 blocks per week (as physical activity is also associated with a decreased risk for some illnesses), it's nice to have a baseline number, and to know that an exercise as simple as walking can make a difference.

Sunday, October 10, 2010

Almost one year ago, I wrote a post on optogenetics, a new field that combines optical techniques (playing with light) and genetic techniques (playing with DNA) to study the brain. Optogenetics is an extremely powerful technique that can be used to control the activity of brain cells. So far, it’s been mostly researched as an experimental tool, but a recent study published in the journal Nature hints at the possibility of using this technique to learn how to treat the most common neurodegenerative disorder after Alzheimer’s disease: Parkinson’s.

Parkinson’s disease, a movement disorder, affects a part of the brain called the basal ganglia, which is critical for planning movement and selecting appropriate actions. The basal ganglia can be roughly divided into two pathways (or networks of brain cells): a “direct” pathway that facilitates (or enables) movement and an “indirect” pathway that inhibits (or prevents) movement. When someone has Parkinson’s disease, it is thought that their direct pathway is not active enough and that their indirect pathway is too active, and this leads to the muscle rigidity, tremors and slowing of physical movement.

In this study, the researchers used a virus to deliver a special channel to the brain cells of either the direct or the indirect pathway of the basal ganglia in mice. This may sound confusing at first, but it’s a really clever experiment. Here is how it works: when certain types of viruses infect cells, they incorporate their DNA into the DNA of the “host” cell, such that the host starts making virus DNA, and ultimately turns into a virus-making factory. The researchers essentially hijacked this process: they engineered a virus that contained the DNA for the special channel, and therefore, once the brain cells got infected, they started making the special channel. What’s so special about this channel? It is activated by light (hence the “opto” in “optogenetics”). When blue light hits this channel, it activates the brain cells.

To tease out the differences between the direct and the indirect pathways, the researchers divided their mice into three groups: a control group (no brain cells infected with the virus), a “direct” group (the virus targets the direct pathway, such that only cells in the direct pathway have the special channel), and an “indirect” group (the virus targets the indirect pathway, such that only cells in the indirect pathway have the special channel). By exploiting this technique, the researchers were able to activate either the direct pathway or the indirect pathway of the basal ganglia simply by shining blue light onto the brain of the mice.

(I realize this is all very complicated, but if you’re still with me at this point, congratulations on completing Optogenetics 101!)

And now for the results… *drumroll* As expected, when the direct pathway was activated, the mice moved more (they ran around more, stood up on their hind legs more, etc.). And when the indirect pathway was activated, the mice froze, and overall moved less. How’s this for mind control?

At this point, it’s easy to get carried away and imagine a plethora of crazy scenarios should this technology fall into the hands of the bad guys (“And now, you will dance for me! Gnahahaha!”). However, the researchers had good intentions. They went on to activate the direct pathway in a mouse model of Parkinson’s disease, and found that this procedure rescued the locomotion deficits of the mice. And that is a wicked finding.

Unfortunately, treating humans with optogenetics is not going to happen anytime soon. There are significant hurdles to overcome before we can even think about it: working with viruses in the brain, delivering the channels only where we want them, assessing unwanted effects, and so on. That said, this study elegantly confirms that the direct pathway of the basal ganglia could be an important therapeutic target for treating Parkinson’s disease.

Sunday, October 3, 2010

With all the discussions around climate change, it's no surprise that researchers are increasingly studying the impact of human activity on the environment. This type of research can take many forms, ranging from the effects of city lights on the migratory paths of birds to the composition of soil in traditional and organic farming. Beyond research, climate change and other environmental discussions have also changed how we (or at least some of us) act at home. We try to remember to turn off the lights. We take shorter showers. We recycle. But there's one sneaky way in which most of us impact the environment, sometimes on a daily basis, without ever thinking about it: when we swallow pills.

Most pharmaceutical drugs, from regular pain killers to chemotherapy drugs, are not fully processed by the body. This means that we end up excreting a proportion of the drugs we swallow, or by-products of those drugs. This leftover pharmaceutical trash ends up in sewage, and eventually makes its way into our aquatic systems.

A recent study by Canadian researchers looked at how low and high concentrations of Prozac, the popular antidepressant, affect the reproductive system of goldfish. The news is not good. The researchers found that even low concentrations of Prozac, similar to what could exist in the environment, significantly decreased the volume of sperm the fish produced when they were sexually stimulated. At the rate we're going, we're going to need to feed our fish Viagra to compensate for their antidepressant load.

I wrote a story on this study for The Mark, a Canadian online forum of news, commentaries and debate. You can read the full article here.

Sunday, September 26, 2010

The traveling exhibit Body Worlds & the Brain has arrived in Vancouver. For those of you who are not familiar with Body Worlds, it is an exhibit that features real human bodies and human body parts that have been preserved through a process called plastination. Through my new line of work at the National Core for Neuroethics, I've become involved with Science World, the organization hosting the exhibit in Vancouver. And because of this involvement, I have now seen the exhibit for the first time. Here are my thoughts about Body Worlds, and I would be most interested to hear yours, whether you've seen it or not.

Before seeing the exhibit, I wasn't very warm to the idea. I had read a lot of ethics articles questioning whether the exhibit preserved human dignity, and whether human bodies could be considered art. The thought of those preserved bodies, with their eyes staring at me, definitely gave me the creeps. I was concerned with issues like consent, and I was uneasy with the fact that the whole thing felt like a freak show, a very profitable freak show.

I was lucky to see the exhibit on a special opening night for volunteers. On the plus side, it wasn't very busy, and I got to scrutinize everything. On the down side, this meant I had to listen to endless speeches before entering the exhibit.

It's during one of these speeches that my gut feeling about Body Worlds completely turned around (keep in mind, by then I still hadn't seen any part of the actual exhibit). One of the speakers discussed how powerful it is to witness the complexity and fragility of the human body, and how this can lead to profound changes in how we view ourselves, and how we take care of ourselves. The speaker was very convincing. Seeing as I'm concerned with caring for my body and tremendously interested in science communication, I started thinking that perhaps I was wrong, and perhaps the "good" of the exhibit (teaching people about the fragility of their bodies) far outweighed the "bad" (ethical questions about dignity and such).

Then I finally got to enter the exhibit and decide for myself, and a really funny thing happened: nothing. I didn't feel any strong negative emotions, didn't think it was gross, inappropriate, or disturbing. But I didn't feel any strong positive emotions either: didn't think it was cool, beautiful, or awe-inspiring. I mostly didn't care, and frankly I was quite bored by the end of it all.

I'm not quite sure what to make of this as I was very much expecting to feel something. Part of the reason for my lack of interest could be that the shock factor was lost on me. After several years of dissecting rodents, I've seen my fair share of guts and brains, albeit on a smaller scale. Perhaps I had overthought the whole thing prior to seeing it. I'm not sure.

I would love to read your thoughts about this. Have you heard of Body Worlds? Were you motivated to go see it? What was your gut instinct if you did see it? What did you learn? If you chose not to see it, why? Share in the comments!

Sunday, September 19, 2010

We hear a whole lot about new brain imaging techniques lately. It seems like imaging studies are constantly revealing new pieces of information about the brain: what part of the brain is responsible for our morality, what happens when you fall in love, and so on. One of the main techniques used in these studies is called functional magnetic resonance imaging (fMRI). Unlike a regular MRI, which takes a static image, fMRI can give us images of a dynamic process: the flow of oxygenated blood in the brain. Presumably, when one region of your brain is activated, the brain cells require more oxygen, so more oxygenated blood flows to that region, and this can be seen and measured using fMRI.

While it may be very interesting to find out what happens to your brain when you fall in love, we have yet to see real clinical benefits from these fancy new brain scans. Brain disorders and diseases, such as depression and Alzheimer’s disease, still cannot be diagnosed using fMRI. One of the reasons for this limitation is that an fMRI scan for a single person really doesn’t tell us much: we can only gain insights from this technique if we look at groups of people, and compare averages. To this day, this problem has really limited the potential of fMRI for diagnosing brain diseases. However, a recent publication in the journal Science suggests that fMRI may soon be clinically relevant.

The researchers were interested in finding an application where a single brain scan could provide information about the individual. They chose to assess the maturity of the brain, and used chronological age as a reference measure. Instead of using regular fMRI, the researchers used an even fancier version, fcMRI (fc stands for “functional connectivity”). This type of imaging measures how the spontaneous activity of different brain regions is correlated. How strongly different brain regions interact with each other is thought to be shaped by all the experiences one accumulates over time, hence the potential to determine maturity from these kinds of measures.

Participants aged seven to 30 years old were asked to undergo a five-minute brain scan. What followed was an extremely complex series of models and algorithms developed by the researchers to establish a “maturation curve”, from which they could then predict the maturity of a given brain based on where its scan fits along the curve.
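To give a flavor of the curve-fitting idea (and only the idea: the numbers below are entirely made up, and the actual study used a far more elaborate multivariate model built from hundreds of connectivity measures), imagine boiling each scan down to a single summary score and fitting that score against age:

```python
import numpy as np

# Hypothetical data: one "functional connectivity index" per participant,
# summarizing their five-minute scan (purely illustrative values).
ages = np.array([7, 9, 11, 13, 15, 18, 21, 25, 30], dtype=float)
connectivity_index = np.array([0.20, 0.31, 0.42, 0.55, 0.66, 0.78, 0.86, 0.90, 0.91])

# Fit a "maturation curve": connectivity index as a function of age.
# A quadratic captures the leveling-off in the twenties.
curve = np.poly1d(np.polyfit(ages, connectivity_index, deg=2))

# To "predict maturity" for a new scan, find where its index lands on the curve.
new_scan_index = 0.70
candidate_ages = np.linspace(7, 30, 500)
predicted_age = candidate_ages[np.argmin(np.abs(curve(candidate_ages) - new_scan_index))]
print(f"Predicted functional brain age: ~{predicted_age:.0f} years")
```

Again, this is only a sketch of the logic: build a reference curve from many scans, then place a new scan on it.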

Ultimately, it was established that yes, a single scan can provide information about the person’s brain: it can predict its maturity level. But couldn’t we already determine a brain’s age just by looking at its shape? For the most part, yes. The reason this article is relevant is that quite a few brain diseases and disorders don’t have a signature shape (unlike tumors, for example, which can sometimes be spotted on a static image). Therefore, having a tool that allows us to assess brain function without having to compare large groups could become very valuable in the diagnosis of some brain disorders (provided we first determine the functional signature of these disorders).

As an interesting side note, the researchers' results suggest that on average, functional brain maturity levels out at about age 22. This obviously represents a physiological maturity level, not a cognitive maturity level, thank goodness. When I was 22, I used to think I knew everything. How I've "matured" since then!

Reference: Prediction of individual brain maturity using fMRI. Dosenbach N.U.F. et al. Science 329:1358-61 (2010).

Wednesday, September 8, 2010

When you walk into any baby store, it quickly becomes obvious that boys like blue trucks and girls like pink dolls. You might find a yellow pajama with a puppy on it, but that one is probably intended for the pregnant mom who doesn’t want to find out the sex of her baby but wants to buy a pajama.

Many researchers have speculated on why boys and girls like different colors and different toys. There are three main theories out there. The first one, the “social learning” theory, suggests that children like certain toys and colors because they are socialized to like them: they like the toys their parents buy for them. The second theory, the “cognitive theory”, suggests that a child knows what gender he or she is, and is aware of the stereotypes, so he or she chooses accordingly. Finally, the “hormonal theory” suggests that sex differences in the prenatal hormone environment change how the brain organizes itself and lead to female- or male-typed behaviours. For example, high levels of androgen (the male hormone) lead to brain masculinization and the choosing of trucks over dolls. While this may sound crazy, experiments have shown that female fetuses exposed to abnormally high androgen concentrations spend more time playing with masculine toys compared to other girls. To make things even more complicated, studies have shown that some kinds of monkeys also show sex-specific toy preferences (and Daddy Monkey doesn’t shop at Wal-Mart, so there goes the “social learning” theory).

So, is it already in your brain when you’re born, or do you learn to love blue trucks or pink dolls?

Researchers set out to shed some light on this question by studying 120 boys and girls aged 12 to 24 months. The task was very simple: the child was shown two images simultaneously (for example, a red car and a red doll, or a blue car and a pink car), and a camera recorded how long the child looked at each image, which is a measure of interest.

The researchers found that boys preferred cars and girls preferred dolls. No big surprise there. Unfortunately, because children 12 months old or older have already been provided with sex-typed toys, their looking preference may reflect the types of toys they have at home and the researchers could not draw any conclusions on whether this behavior was learned or innate.

The interesting finding lies in the colors: as it turns out, the children cared very little about the color of the images. Boys preferred the cars, regardless of whether they were pink or blue, and conversely, girls preferred the dolls, regardless of their color. In fact, the researchers found that as a whole, everybody liked red the most. This finding indicates that the stereotypical color preferences seen in older children are most likely learned behaviours.

As I’ve mentioned before, my favorite kind of research article is the one that leaves me with more questions than I started with, and this is one of them. Why was pink adopted as the “girl” color if girls aren’t naturally drawn to it? At what stage does the shift occur from not caring about colors to caring about them? Is this shift really purely socially driven? In any case, the next pajama I buy for a child will be red.

Thursday, August 19, 2010

In the animal kingdom, it's well established that by interacting with some individuals and avoiding others, you can influence your experience with natural selection (read: your chance of mating with a hot stud/chick). I think this paradigm is especially obvious in the high school setting: hanging out with the footballers and the cheerleaders increases your odds of mating (or at least, attempting to mate), while hanging out with the geek squad (ah, the good old days) definitely decreases your chance of mating (Glee, anyone?).

A while back, I wrote about dating lessons we can learn from monkeys. Today, I'll share with you the results of a recent study that highlights a dating lesson we can learn... from birds.

American researchers set out to analyze the social networks of a species of wild finches to study the relationship between how pretty they are (ornament elaboration), how social they are (social lability), and how successful they are at mating. So they captured and banded a whole bunch of finches, and tracked them year-round.

The researchers found that less elaborate males (the "ugly" ones) shifted social groups more often than the prettier males. When it came to finding a mate, this party-hopping behavior somewhat compensated for their ugliness: the highly social birds were more successful at finding a mate when compared with equally ugly but less social birds.

There's an important lesson here: to increase your chance of mating, it might be a good idea to vary who you hang out with. I'm sure Dear Abby would approve.

Reference: Structure of social networks in a passerine bird: consequences for sexual selection and the evolution of mating strategies. (2010) Oh and Badyaev, Am Nat 176(3):E80-9.

Sunday, August 8, 2010

Health information can sometimes be a real puzzle*. For example, your doctor may recommend that you take a calcium supplement, since calcium is important for strong bones. Then, the next day, you may read in the news that calcium supplements will give you a heart attack, which is exactly what happened to my mom last week. What should you do? Like most medical interventions, it’s all about risks and benefits. While a new study (which you may have already heard of) highlights a risk of calcium supplements, don’t throw away the bottle just yet.

The researchers carried out a meta-analysis: a study of studies. Essentially, they searched for previous studies of calcium supplementation (compared with a placebo) and compiled them together to try to tease out effects that each single study may not have detected. Overall, the researchers ended up analyzing 11 studies published between 1990 and 2007, for a total of about 12,000 participants. Across all the studies, 143 people who were taking calcium supplements had a myocardial infarction (a heart attack), compared with 111 people who were taking the placebo. This represents an increase in the risk of myocardial infarction of 31% for those taking calcium supplements. Interestingly, calcium supplements were only associated with an increased risk of myocardial infarction in people who already had a high calcium intake through their diet (more than 805 mg/day).
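As a back-of-the-envelope check of where a number like that comes from (a sketch only: I'm assuming the two arms were roughly the same size, which the meta-analysis doesn't guarantee, and the researchers' actual figure came from a more sophisticated survival analysis), the relative risk works out like this:

```python
# Toy relative-risk calculation (illustrative only; the equal group sizes
# below are an assumption, not a number from the study).
calcium_events, calcium_n = 143, 6000   # heart attacks in the calcium arm
placebo_events, placebo_n = 111, 6000   # heart attacks in the placebo arm

risk_calcium = calcium_events / calcium_n
risk_placebo = placebo_events / placebo_n
relative_risk = risk_calcium / risk_placebo

print(f"Relative risk: {relative_risk:.2f}")
```

With equal group sizes this simplifies to 143/111, or about 1.29, in the same ballpark as the 31% increase reported from the full analysis.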

This study will no doubt shake things up in the fields of cardiovascular health and osteoporosis prevention. However, there is one important caveat with this analysis: the researchers did not look at studies where the supplement was a combination of calcium and vitamin D. Therefore, one cannot assume that calcium/vitamin D supplements would lead to the same risks. In fact, another recent study in women reported that calcium and vitamin D administered together had no effect on the risk of heart disease. It’s also important to remember that when weighing risks and benefits, calcium (and vitamin D) does a lot more than just strengthen your bones: it has also been shown to play a role in the prevention of certain cancers.

One thing is for sure: dietary calcium intake is safe. So go ahead and enjoy the moo juice.

*You’ll be pleased to hear that I am dedicating my postdoctoral training to solving the puzzle.

Monday, August 2, 2010

I've been busy. I would like to say that I've been enjoying the wonderful Vancouver summer, but in academia, summer rhymes with grant writing. This means that I spend most of my days writing up long-winded research proposals that describe the exciting science I'd carry out if only (insert name of funding agency) would give me the (insert large amounts of money) I need. Of course, each funding agency (usually charitable organizations or government organizations) has slightly different requirements. One wants an 11-page proposal in Times New Roman font, size 12, with the references as part of the proposal. The other wants a 14-page proposal in Arial font, size 11, with the references in an appendix. So on and so forth. So yeah, buckets of fun.

Interestingly, trying to convince others that my brilliant ideas should be funded also makes me wonder how other types of research get funded. Since this is the summer and I'm sure you'd prefer some light reading, I thought I'd share a little gem of an article on a topic of utmost importance that really illustrates my point about funding: the best possible way to... pour champagne.

French researchers (who else?) looked closely at two different ways of pouring champagne into a champagne glass (a flute): (1) the traditional way, which consists of letting champagne fall vertically and hit the bottom of the flute, thus generating a thick head of foam, and (2) the "beer-like" way, which consists of pouring the champagne on an inclined flute wall, which generates less foam. The researchers analyzed a number of parameters such as the concentration of dissolved carbon dioxide and the temperature of the champagne. As it turns out, serving champagne chilled (4-12 degrees Celsius) in the beer-like way minimizes the loss of dissolved carbon dioxide, a parameter of utmost importance since it impacts various aspects of the champagne-tasting experience. The researchers stress the value of their research and call for revisiting the traditional way of serving champagne, especially when champagnes are to be compared in competitions.

Wow. Seriously, who funds this? And most importantly, why is it that some researchers have all the fun? The fine print tells us that the researchers "thank Champagne Pommery for regularly supplying (them) with various champagne samples". I think I missed my calling.

Reference: On the losses of dissolved CO2 during champagne serving. (2010) Liger-Belair et al. Journal of Agricultural and Food Chemistry. [Epub ahead of print].

Monday, July 19, 2010

In my last post I wrote about how children think junk food is tastier when there’s a cartoon on the package. As adults, we may be wiser to such blatant marketing schemes, but we still love our junk food, Shrek or no Shrek. Take pop for example: as of 2006, the production value for carbonated soft drinks in Canada was $2 billion. We also like our donuts: one of the biggest sources of added sugars in our diet is bakery goods. While it’s sometimes nearly impossible to resist the enticing aroma of fresh cinnamon buns baking at the coffee shop, think twice before you splurge: a new study suggests that added sugar in the form of fructose is positively linked to high blood pressure.

High blood pressure, or hypertension, affects over 5 million Canadians, and is increasingly affecting teenagers. It’s a direct risk factor for many nasty conditions, like heart failure and stroke. Interestingly, the increase in the prevalence of hypertension mirrors the increase in our consumption of fructose. And while technically, fructose is a type of sugar found in fruits, it is not thought that the increase in fructose consumption is due to eating more apples. The culprits are sweetened drinks, processed foods and those deadly cinnamon buns.

To address this question directly, American researchers analyzed the data from the very, very large (>4500 participants) National Health and Nutrition Examination Survey. Not surprisingly, the more fructose you consume, the higher your risk of hypertension. This fructose intake/high blood pressure relationship holds even when you control for a number of other factors, including demographics, physical activity, other diseases, calorie intake, alcohol intake, salt intake, and others.

This study highlights what we call an “independent association”, which is not to be confused with a cause-and-effect relationship. There is not enough data to say that eating a lot of fructose leads to hypertension, only that those two things seem to occur together. However, the study has some strong features, namely that it is looking at a very large sample of people, and that it controls for many possibly confounding factors (an important one being salt intake, as it can lead to hypertension).

On the plus side, it’s very easy to limit your fructose consumption by decreasing the amount of pop you drink and the amount of processed foods you eat. On the downside, cinnamon buns are oh-so-very-tasty...

Monday, July 5, 2010

In this age of pre-prepared processed meals and endless hours on Facebook, it’s no wonder kids are getting fatter. In the US, obesity rates have doubled for preschoolers (2-5 years old) and more than tripled for children 6-11 years old. To explain this alarming obesity trend, many blame the accessibility and affordability of fast food. As a graduate student I often relied on cheap take-out to sustain myself. Luckily I quickly discovered that in Vancouver, sushi costs less than a McDonald’s meal, offering an interesting alternative. If my rent didn’t force me to live below the poverty line (hey, this PhD’s got to be worth something, right?), I would have thought this was heaven. In any case, I’m digressing. Cheap fast food is one part of the equation, kids drooling, lifeless, in front of the computer and the television is probably another part. Interestingly, a recent study suggests that another contributor to the obesity crisis is none other than… Scooby-Doo. And Dora. And Shrek.

The researchers were interested in finding out if putting the image of a popular character on the packaging of a product (this marketing ploy is called “character licensing”) is an effective way to sell food to kids. To test this, the researchers studied three foods: graham crackers, gummy bears and baby carrots. The participants in the study, children aged 4 to 6 years old, were presented with two packages of the same food item (for example, graham crackers). The only difference was that one of the packages had a sticker of a cartoon character (Scooby-Doo, Dora or Shrek) on it. The kids were then asked to say if one of the two foods tasted better, and if so, which one. They were also asked which food they would prefer to have for a snack.

So, does it work? Are children that oblivious to this obvious and dubious marketing trick (Scientific Chick challenge: Write a sentence with more than 3 words ending in -ious)? Absolutely. Overall, children perceived the food items with the cartoon on them to taste better than the ones in the plain packaging. This finding was statistically significant for the “junk” food (the crackers and the gummy bears). Not surprisingly, the children also indicated they would prefer the snacks with the characters on the packaging. As it turns out, character licensing is especially effective in children because they lack the ability to understand that the advertisement is meant to be persuasive.

You would think that all you would have to do to solve the obesity crisis is to slap Elmo’s face on broccoli and apples, but the fact that the character licensing experiment didn’t work as well with the carrots suggests this wouldn’t necessarily do the trick. The researchers only studied 40 children, a relatively small sample size to draw out any solid conclusions, but it’s still an interesting finding. I find it a little worrying that cartoon characters can lead to a more positive perception of the taste of junk food. I find it very worrying that food and beverage companies spend more than $1.6 billion per year on advertising for kids. I guess Ramen advertises for grad students and nobody gets worked up about that.

Monday, June 28, 2010

At the risk of sounding like a broken record, exercise is good for you and your brain. That being said, most studies looking at exercise and cognitive function evaluate aerobic exercise (the kind that gets your heartbeat going). The other kind of exercise, resistance training (strength training with weights), has not been of much interest, perhaps due to the old stereotype that has been plaguing bodybuilders forever: big biceps, small IQ (although big biceps never stopped anyone from becoming governor of California). Switch the young lads for older women, though, and a recent study from a team of researchers at the University of British Columbia (represent!) suggests that gaining muscle can translate into a better brain.

The study looked at 135 women between the ages of 65 and 75 over the course of a year. The women were assigned to one of three groups: group one took a one-hour resistance training class once a week, group two took a one-hour resistance training class twice a week, and group three, the control group, took a one-hour balance and stretching class twice a week. The women were all evaluated for a range of cognitive functions at the start of the study, at the six-month point, and at the end of the study (at the 12-month point).

The bad news is that strength training for six months, whether once or twice a week, didn’t lead to any changes. The good news is that if you stick to it for a year, you only need to train once a week to see an effect. After 12 months, the researchers found that all the women who underwent strength training showed a significant improvement in attention. The researchers evaluated attention using the well-established Stroop test (see image below), where the names of colors are written in an ink of a different color (for example, the word blue is written in red ink). To assess attention, the participants were asked to name the color of the ink (and not the word) as fast as they could (try it!). The once-a-week and the twice-a-week resistance training groups significantly improved on this task, while the performance of the balance and stretching group slightly deteriorated.
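For the curious, the logic of a Stroop trial is easy to sketch in code. This is only a toy illustration of the congruent/incongruent idea (the study used a standard, properly administered Stroop test, not anything like this):

```python
import random

# Toy Stroop trial generator: on an incongruent trial, the word names one
# color while the "ink" is a different color, and the participant must name
# the ink, not read the word. (Illustrative sketch only.)
COLORS = ["red", "blue", "green", "yellow"]

def make_trial(congruent=False):
    """Return a (word, ink) pair for one Stroop trial."""
    word = random.choice(COLORS)
    if congruent:
        return word, word  # easy trial: word and ink match
    ink = random.choice([c for c in COLORS if c != word])  # the hard case
    return word, ink

word, ink = make_trial()
print(f'The word "{word.upper()}" printed in {ink} ink -> correct answer: "{ink}"')
```

In a real administration, reaction time and accuracy on the incongruent trials are what index attention.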

This improvement in cognitive function didn’t come without a price. The women in the once-a-week resistance training group complained of joint and muscle pains more than the women in the two other groups. It seems that the sweet spot for both an improvement in cognition and a lower risk of pain is to train twice a week (at least). This makes sense to me: the more frequently I exercise, the more my body gets used to the motions. It is also worth noting that the researchers tested other cognitive tasks such as memory and these didn’t show any change with resistance training.

Overall, though, I think this study is great news. I know that many older adults shy away from rigorous aerobic exercise (even young adults… *cough cough*), so this could be an easier alternative to help with brain health. And even if the “brain benefits” of resistance training could be a little more impressive (like by curing Alzheimer’s disease, while we’re at it), on the plus side, strength training also improves gait speed (your natural walking speed), and an improved gait speed is associated with a significant reduction in mortality. So if you don’t exercise for your brain, do it for your lifespan.

Reference: Resistance training and executive function: A 12-month randomized control trial. (2010) Liu-Ambrose T. et al. Arch Intern Med 170(2):170-8.

Tuesday, June 15, 2010

Scarecrow, from the Wizard of Oz, desperately wanted a brain. Given the financial success of the brain training industry, it seems he’s not the only one hoping for cognitive enhancement. “Brain training” refers to the improvement of cognitive function through the regular use of computer exercises. Recently, it’s popped up everywhere, targeting everyone from kids through video games like “Big Brain Academy” to older adults through iPhone apps like “Lumosity”. While brain training companies are stuffing their pockets, the question remains: does it work?

A team of researchers from the UK set out to test how well brain training works. They teamed up with a popular science show on television and recruited over 11,000 healthy participants. The participants completed a general initial assessment of cognitive function (the “benchmarking” assessment), then started a regimen of 10-minute training sessions three times a week for six weeks. The online training sessions tested a broad range of cognitive functions: short-term memory, attention, math skills, and so on. These tests were designed to be similar to those found in commercial brain training programs. The researchers followed the progress of the participants over the six weeks of training and concluded the study with a final general benchmarking test similar to the initial one.

The good news is that the researchers saw a significant improvement on the specific tasks the participants trained on. The bad news is that this improvement did not extend to general cognitive function. These results mean that while you can improve at, say, a specific memory game that involves remembering the items in a scene, this won’t necessarily translate to better memory in your everyday life (where did I put my keys again?).

As can be expected, the study was criticized, especially by individuals with a commercial interest in brain training. Some suggested that the participants didn’t train long enough or often enough to see an improvement in general cognition. Others said that just because these researchers didn’t observe an improvement doesn’t mean that cognitive enhancement through computer games is impossible. However, it’s important to keep in mind that the study also scores some good points: the researchers looked at a very, very large number of participants, and the games used for brain training mimicked those that are commercially available.

Personally, the last thing I need is a reason to spend more time in front of the computer, and I like to think that fresh air is a terrific cognitive enhancer. What do you use to maximize your brain power? Coffee? Naps? Share in the comments!

Reference: Putting brain training to the test. (2010) Owen, AM et al. Nature 465:775-8.

Thursday, June 3, 2010

At the end of June, I will once more ride my bike from Vancouver to Seattle as part of the Ride to Conquer Cancer. In the weeks leading up to this event, I log many, many kilometers on the saddle and inevitably my thoughts wander to cancer biology (and sometimes to the excruciating pain emanating from my behind). What triggers cancer? How can cancer be prevented? Why are some cancers, like breast cancer, more prevalent in industrialized countries? While researching the question, I came across a most unexpected potential risk factor. I’m especially excited about this piece of relevant science because for once I won’t be writing about how eating healthy and sleeping more can cure all your ailments.

In the study, researchers took groups of female rats and exposed each group to a different intensity of white light during the dark phase of their daily cycle (typical lab rats live on a programmed 12-hour light/12-hour dark cycle). After two weeks of this night cycle disruption, the researchers implanted a tumor (derived from human breast cancer tumors) in each female rat and continued the night cycle disruption for many weeks. By the end of the experiment, the rats that had been exposed to the strongest intensity of light showed a marked increase in tumor growth rates. The brighter the light at night, the bigger the tumor.

OK, so light at night makes tumors grow faster, but can too much light actually cause cancer? To answer this question, it’s best to turn to studies in humans. There is convincing evidence that women who work night shifts have a significantly higher risk of breast cancer. Likewise, women with the brightest bedrooms have a higher risk of breast cancer. Scientists believe the reason for these correlations is a molecule called melatonin. At night, in the dark, your body produces melatonin, which is a very effective anti-cancer molecule. Several studies have looked at this link in more detail and have shown that melatonin can block the development and growth of tumors in non-human models of breast cancer.

The light-cancer link is gaining interest, and researchers have even spruced things up with a catchy acronym, LAN (for light-at-night), so it’s something to keep in mind. Based on this research, I’ve decided to break my habit of flicking on the lights for my midnight nature calls. Would this habit necessarily give me cancer? No. But flicking on the lights does interrupt my production of melatonin, which, on top of being an anti-cancer molecule, is also a powerful antioxidant. So I’m just trying to stack the odds in my favor. That being said, I’m running into a different problem, which is waking up everyone in the building when I stub my big toe on the door frame. Nobody said staying healthy was easy…

About Me

Dr. Julie is an Assistant Professor of Neurology at the National Core for Neuroethics and the Djavad Mowafaghian Centre for Brain Health at the University of British Columbia. She holds a PhD in Neuroscience.