Image credit: Henry Meynell Rheam [Public domain], via Wikimedia Commons

For students across the world, and especially at Berkeley, sleep seems to be a scarce commodity. I know that many of us are willing to risk hygiene, public embarrassment, and even a healthy social life for the chance to eliminate a couple hours of sleep debt. It’s interesting how people speak colloquially about sleep in such economic terms, and you have to admit – the idea of sleep as a consumable resource with a daily requirement is appealing. The ability to “go into debt” by not getting enough sleep, as well as the opportunity to “repay one’s debt” by sleeping more than our daily requirement, are concepts that our society, ever obsessed with quantitative data, is happy to embrace. But is there validity to this school of thought, or is it simply an overly simple reduction of a physiological process? Our knowledge of sleep is extremely limited. In an interview with National Geographic, Dr. William Dement, co-discoverer of REM sleep and co-founder of the Stanford Sleep Medicine Center, says, “As far as I know, the only reason we need to sleep that is really, really solid is because we get sleepy.”

What we do know is the obvious: researchers publishing in the journal Sleep have documented a “cumulative increase in performance lapses across days of sleep restriction”. The presence of a collective sleep debt is obvious, but is it possible to pay it all back? Does our body work as a literal bank, where if we are 10 hours in debt, we will continue to feel its effects in the future? As it turns out, this question is much more difficult to answer.

One researcher, Dr. Paul Shaw of Washington University in St. Louis, has turned to Drosophila melanogaster (the fruit fly) as a model organism to study sleep. A protein named amylase, found in the saliva of both humans and the fly, seems to be a quantitative indicator for measuring sleep deprivation. According to Shaw, amylase’s salivary activity, as well as its concentration, is highly correlated with sleep drive. Identifying a biological marker such as amylase is only the first step in the struggle to understand sleep debt, and researchers are currently looking for other markers to validate its use. Current approaches to studying sleep deprivation rely on subjective tests, and adopting a more quantifiable approach is the first step to helping the sleep-deprived population of the United States. The CDC found that “among 74,571 adult respondents in 12 states, 35.3% reported <7 hours of sleep during a typical 24-hour period, 48.0% reported snoring, 37.9% reported unintentionally falling asleep during the day at least once in the preceding month”. These statistics are harsh, but they further emphasize the importance of catching up on sleep. Though your body’s need for sleep may not be as well defined as a bank account, catching up on lost sleep is still a good way to recover from a long week of midterms.


It’s pretty clear that most Americans actually believe in vaccines. A Pew research poll from January 29 of this year found that 68% of American adults think that childhood vaccines should be mandatory, so it’s safe to assume that most everyone thinks they are efficacious at preventing disease (which they are). But there are a large number of people who don’t vaccinate their children or themselves for various reasons — such as fear of mercury-containing compounds, distrust of pharma companies or the government, and a (non-reproducible and most likely false) link to autism.

Since the recent measles outbreak in Disneyland in California, the vaccine righteous have had a field day poking fun at those who refuse vaccines. This has taken the form of saying that declining a vaccine is equivalent to having the brakes removed from your car, and general name calling. While people who satirize those who disagree with them may feel clever or be filled with a warm feeling inside for telling someone they’re stupid, they’re not helping their case — they’re being counter-productive and are actively discouraging people from getting vaccines. I’m not being facetious; a Scientific American report (originally published in the journal Pediatrics) last year found that after doctors told parents that there was no link between autism and vaccines, parents were less likely to vaccinate their future children than before being told so. This is in spite of the fact that they knew that vaccines were less harmful than they originally thought.

Stop being condescending. Stop being a jerk. That’s no way to convince someone that you’re right or that they should do what you say. Listen to their concerns instead, go home and think about it, and then respond in a thoughtful and meaningful way. These people are intelligent and concerned about their own health and that of their children. The purpose of this article is to listen to those concerns and respond to them — it’s a starting point for discussion, and while it might not convince everyone to get a vaccine, it should point them in some directions where they can continue their research and, hopefully, conclude that getting vaccinated benefits them and, most importantly, their children.

Many people (including friends of mine) don’t want to be exposed to the mercury-containing compound thimerosal, which is found in some vaccines. To start any conversation about toxicity, one needs to talk about dose. Toxicity is a dose-dependent property. Water can be toxic if you drink too much of it; it has actually killed people, in a condition called acute hyponatremia that is caused by overhydration. Calling a chemical “toxic” generally means that small doses can be harmful, though anything can be harmful in a large enough dose. I wrote a previous blog post on metallic mercury, which has a hard time killing you, though it still can be dangerous. But the mercury in thimerosal is organic mercury, which can be pretty nasty and is generally bioavailable. So why is it used in vaccines? The FDA classifies it as a preservative, because it can kill bacteria (such as staphylococci, or staph) that can contaminate vaccines and infect you once you are injected. Thimerosal kills bacteria because it’s toxic, and if it’s injected into you, maybe that’s less than ideal. That is why the FDA is working with companies to remove thimerosal from flu vaccines for children (many formulations are already mercury free). Thimerosal is also not used in MMR vaccines. In a similar vein, the FDA recommends that pregnant women and young children avoid eating swordfish (which is delicious), because there’s bioavailable mercury in it as well. The dose of mercury in vaccines is at most 25 micrograms, compared to 147 micrograms for a scrumptious 4-ounce swordfish steak, and the lethal dose of thimerosal is 75 milligrams per kilogram of patient weight. That is, the amount of mercury present in a vaccine is around 0.003% of a lethal dose for a 25-pound child, and six times less than in a fish steak*. Many vaccines now have alternate, milder preservatives without any metals, or no preservative at all.
The other preservatives currently in use are phenol, benzethonium chloride, and 2-phenoxyethanol, which range from one-quarter to one-sixteenth as toxic as thimerosal according to lethal dose data (acute toxicity data can be found in section 11 of the material safety data sheets, which are linked). The way I like to think about the preservatives is as follows: before a doctor gives you a shot, they take an alcohol swab to your arm to kill any surface bacteria that could infect you. The preservatives work the same way — they kill any bacteria that could have potentially contaminated the vaccine, making it safer. The chemicals used as preservatives are given in doses well below the toxic range for humans and are harmful only to bacteria; that is, they are safe for all intents and purposes. The problem with using naturally occurring antibiotics as preservatives is that some bacteria are resistant to almost all of them, like MRSA, our (not-so) friendly staph bacterium that contaminated vaccines before preservatives were used.
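For the numerically inclined, the dose comparison above can be sanity-checked in a few lines. This is just a back-of-the-envelope sketch using the figures quoted in this post; the pound-to-kilogram conversion is the only number added:

```python
# Back-of-the-envelope check of the mercury dose comparison.
# Figures from the post: 25 ug of mercury per vaccine dose at most,
# 147 ug per 4-ounce swordfish steak, thimerosal lethal dose ~75 mg/kg.

LB_TO_KG = 0.4536  # pounds to kilograms

vaccine_ug = 25.0        # micrograms of mercury in a vaccine dose
swordfish_ug = 147.0     # micrograms in a 4-ounce swordfish steak
lethal_mg_per_kg = 75.0  # lethal dose of thimerosal, mg per kg body weight

child_kg = 25 * LB_TO_KG                        # a 25-pound child
lethal_ug = lethal_mg_per_kg * child_kg * 1000  # lethal dose in micrograms

print(f"Vaccine dose: {100 * vaccine_ug / lethal_ug:.4f}% of a lethal dose")
print(f"Swordfish steak: {swordfish_ug / vaccine_ug:.1f}x the mercury of a vaccine")
```

Running it reproduces the post’s figures: roughly 0.003% of a lethal dose, and just under six times less mercury than the fish steak.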

I get it. Chemicals can be scary. But the thought of a child or loved one dying from an easily preventable disease scares me much more.

As for trust of big pharma and the government, that’s another issue. Big pharma has profits to make, and the government has a history of telling people what to do. That’s an image, communication, and credibility problem. But if you really think about it, the people who go into medicine to figure out ways to combat infectious disease actually have their hearts in the right place. They aren’t trying to make a magic pill that will allow you to lose weight without diet or exercise. They are trying to find a way to boost your own immune system. That’s what vaccines are — they expose you to a weakened version of the virus, or just the protein coat of the virus, to teach your body to recognize it and kill it in case you do become infected. There’s no hocus-pocus or wand waving; vaccines are effective because they work through natural pathways. They don’t fight diseases for you, nor do they just treat symptoms; they simply allow you or your children to fight diseases more effectively. So:

Is big pharma out to make a profit? Yes.

Do vaccines work? Yes.

Is it OK that big pharma makes money off of an effective treatment? I like to think so.

Another question is about the government — does the FDA do a good job of regulating companies that step out of line? After Upton Sinclair’s The Jungle exposed unsanitary conditions in the meatpacking industry in 1906, the Bureau of Chemistry (later the FDA) was created that same year to deal with them. If vaccines contained chemicals that caused illnesses in children, politicians would be lining up to denounce the companies and the FDA would be working overtime to shut them down. There’s nothing more appealing to a politician than saving the children, and at the end of the day, almost all politicians support vaccines, even in the GOP, which is wary of any form of government intervention. That’s a pretty good barometer of vaccines’ perceived (and actual) safety.

As for the alleged relationship between the MMR vaccine and autism, that has been dealt with before. Perhaps the most powerful and concise treatment of it that I have seen is a 90 second clip by Penn and Teller, which I highly recommend (contains explicit language). Right now there is no known cause of autism, according to the Autism Society of America, and since mercury preservatives were removed from many vaccines years ago and children still develop autism (and unvaccinated children can also develop autism), there’s really no good reason to believe in a solid connection between the two. I know it can be hard for parents of autistic children looking for answers. I hope they find those answers soon, but they will have to lay the blame elsewhere.

For those of you who already think everyone should be vaccinated, take the time to listen and productively engage with your friends or loved ones who don’t want to be vaccinated, because you actually care about them. They’re more likely to listen to you than to a doctor, or a talking head on TV, or even a blogger on the internet.

For those of you on the fence about vaccination or those of you who are disinclined to vaccinate yourself or your children, all I can say is that vaccines save lives and vaccines can save your children’s lives. Keep doing research, and keep an open mind.


Turtles are stereotyped as moving so slowly that you might not realize they can move quite quickly, especially when they have somewhere to go! The painted turtle (Chrysemys picta) is one of the most common turtle species native to the United States. These reptiles primarily hang out in freshwater ponds and eat watery foods like algae, plants, and fish. But how would turtles cope if their home pond disappeared? Given the major climate changes that are happening at a rapid pace, it is often difficult to predict the repercussions of these changes until it is too late. But once in a while, a study site gives scientists the perfect opportunity to test hypotheses about how animals respond to habitat change.

A new manuscript “The Role of Age-Specific Learning and Experience for Turtles Navigating a Changing Landscape,” published this month in Current Biology, details just what happens to turtles that lose their habitat and lose their way. The results suggest that turtles have a critical period for learning information about their environment, which could have long-term repercussions on their behavior and survival. I corresponded with one of the study’s authors, Timothy Roth, to get more details.

A turtle with a radiotransmitter attached.

Start with a rapidly changing environment – say, a pond that gets drained yearly. That’s exactly what Drs. Roth and Krochmal used for their study site, in a project that spanned five years. Two ponds on a farm in Maryland were fully and quickly drained (within 24 hours) once a year to manage the local bird population and maintain wetland health. But many turtles use these two temporary ponds and must find a new water source each year when they are drained. In a sense, this drainage creates a problem the turtles must solve: adapting to a changing landscape to find a new place to hang out.

Radiotransmitters were glued onto 60 resident turtles that grew up in the surrounding area. The radiotransmitters allowed the position of the turtles to be recorded at least three to four times per hour. Timothy Roth notes, “Contrary to popular belief, these turtles can move relatively quickly when they want to. People have the general impression that turtles are slow. But this is not always the case.”

The red paths show the movements of resident turtles from the drained ponds (T) to permanent ponds (P). Yellow paths are a sample of translocated adults. Image courtesy of Science Direct/Current Biology.

One fascinating result is that ALL 60 of the resident adult turtles and a handful of local juvenile turtles used just one of four pathways to get from the drained pond to a new pond. The results took the authors by surprise: “We never expected these movements to be so accurate and repeatable. It is really quite amazing.” The turtles took around sixteen hours, on average, to relocate themselves to their new home. According to Roth, “We do not know how the paths were originally created or why those paths were used over others. What we can say is that the paths are not those of least resistance nor are they the shortest or straightest.”

These adult turtles, once they established a route that worked for them, stuck to it over the years; all turtles that were tracked for multiple years used the same path each year. On their first journey, the resident juveniles could have been following cues left by adult turtles, such as trails of shed turtle skin and feces. “Following the leader” is not a likely explanation, as multiple turtles were never seen leaving the pond together or taking a route simultaneously.

So what happens if you take a naïve turtle that grew up in another pond and plunk it down in the study site? Well, you might think that if turtles just use local cues, it would pick up on the trails of the resident turtles and use one of those four established turtle paths. And that’s exactly what translocated juvenile turtles did.

However, for turtles over four years of age, this was not the case. Of the thirty naïve adults, NONE were able to successfully find a new pond with water, even though the permanent ponds were less than 1 km from the drained ponds. Instead, these adult turtles wandered along rather circuitous paths, sometimes even crossing the established resident turtle routes, but never picking up on them. The study authors had to abort this part of the study after three weeks and return the naïve turtles to their old pond, as the turtles were losing significant amounts of weight and had not yet found a source of water.

Scientists are still researching which cues turtles rely on for navigating through a complex environment (like this one).

The results suggest some form of crystallized learning happens during the early part of a turtle’s life. Early experience may be dependent on environmental cues, but later navigation depends on early experience, rather than environmental cues. I had to ask, can you teach an old turtle new tricks? Roth asserts, “Adult turtles certainly learn. Other studies in other labs train them in classic Skinner boxes and the like. However, in our ecological situation, there seems to be no learning of paths by any individual over the age of three. There is likely a developmental constraint on learning, perhaps as a function of changing neural functioning.”

Like all good studies, this one leaves us with just as many questions as answers. Why did the turtles choose those paths in the first place? Could stress be affecting the adult translocated turtles somehow? Roth and Krochmal will continue to examine the possible mechanisms behind this route learning during the critical period, including a deeper examination of the use of UV cues in the field. They also hope to examine the effects of stress on turtle navigation as part of their continuing research.

This study demonstrates how important early learning is to a turtle’s ability to adapt to a changing environment. Juveniles appear to adapt to new environs, but it’s not clear this would be true for adults. For the adult painted turtle’s sake, we should all hope they don’t find themselves a stranger in a strange land, as these reptiles just might wander in circles and die before finding a new water source!

All images courtesy of Anja Ulfeldt

Getting his ticket to the Exploratorium on the opening night of the new exhibit Cognitive Technologies, Berkeley neuroscientist Jack Gallant was asked if he thought it would work. “It better work,” he said. “It’ll be amazing if it works.”

Gallant’s actual words were, characteristically, a smidge more colorful. After all, his students had worked hard for weeks on the exhibit, the Exploratorium’s very first created by an outside group. They joined forces with the Berkeley-based Cognitive Technology group, an incubator whose stated goal is to create tools to better understand, extend, and improve the human mind, and m0xy, a community-based center for industrial arts in Oakland.

On entering, guests are fitted with headsets that read their brain waves. At the Calibration Station, they open and close toylike lotus sculptures by switching among targeted brain states like relaxation, excitement, and focus. Guests can then explore gorgeous brain data visualizations with hand gestures, solve an illuminated sculpture-puzzle by focusing, or learn to control a robotic arm by imagining movement. “No Magic Here,” a plaque announces, explaining that the EEG (electroencephalography) headsets pick up brain waves from the surface of the head.

The room is aglow with LEDs and aflutter with excitement. In a corner, an armchair invites headset-wearing guests to learn to change their emotions on demand. Elsewhere, people don Oculus Rift virtual reality headsets, one of which allows them to navigate the brain’s maze of blood vessels from the point of view of a clot-busting white blood cell. Some people instead take advantage of opening night to pick the brains of the experts.

Two such experts are James Gao and Natalia Bilenko from the Gallant lab. They were recruited to work on the exhibit by Stephen Frey, director of Cognitive Technologies and a fan of Bilenko’s work on Dr. Brainlove.* For the Exploratorium, she and Gao worked to make interactive visualizations of the brain with Gao’s software, Pycortex.

The Gallant lab has been making splash after splash in the world of neuroimaging. They have worked out how to perform what is essentially a scientific magic trick. By showing people pictures and videos while they have their brain scanned, they train computer algorithms to recognize the ways in which visual information is encoded in the brain. Then, like a magician pulling the correct card out of a deck of cards, their algorithms can decode brain activity in future scans to guess what people are seeing. It’s not quite mind reading, but for many, it sure seems close.

“I’m hoping to teach people a little more about fMRI (functional magnetic resonance imaging) research, and how it really isn’t magic,” says Gao. Huh. A whole nightclub’s worth of blinky lights in that room, and no magic. Go figure.

“With fMRI, we’re not even recording from neurons,” Gallant would want me to note. Gallant is notoriously precise in describing his work. “It’s blood flow. We’re measuring blood flow,” he says, often. True, fMRI tracks changes in blood flow in the brain, but fortunately, we know that rushes of blood to the head are just smoke signals from an underlying neuronal fire.

There is, however, a more fundamental limitation to telling someone what they are thinking: That person will always know better than you. There are many reasons why this will always be so, starting with the fact that subjective experiences are generated by a brain containing more connections than the entire internet.

Another reason is that with brain recording techniques, there is a tradeoff between convenience and accuracy. The less invasive a technique is, the muddier the signal it gets. And since nobody just volunteers to have their skull cracked open right there on the museum floor, we must take what we can get. The Exploratorium exhibit relies on lightweight, wearable EEG headsets donated by Muse. When a guest leaves, museum attendants can simply sterilize their headset like a pair of bowling shoes, recharge it, and hand it off to the next neuroadventurous guest to walk in. Great for walking around, not so great for clean signal. This tradeoff is a major bottleneck; moving forward with much-hyped, futuristic projects like neurogaming, for instance, turns out to be a matter of literally getting the signal through our thick skulls.

Overhead at the Exploratorium, guests’ brain waves were plotted as part of the Cloud Brain project. Two museum-goers stood below, regarding the screen with no small amount of confusion. “Does this mean I’m not really thinking anything?” one asked. “I must be at like a yoga retreat now or something,” said the other.

Although it may not seem like it, these guests were on to something. The mismatch between their lived experience and their recorded brain waves could have been a hardware problem, rooted in the limitations of the EEG headsets. But let’s pretend for a moment we did have perfect data. Still, to really understand what we’re recording at all, scientists have to make connections between different sources of evidence to make inferences about what is really going on. “Our results are based in tested reality,” says Gao. “For example, damage in brain areas that show an fMRI response to seeing faces actually causes real deficits in face perception.”

Interpretations of the data rely on a boiled-down version of a vast literature–we are standing on the shoulders of giants. Any individual’s headset experience is classified as meditation, excitement, or focus based on what’s been seen in past experiments. As a single datapoint at the end of a very long running average, your mileage may vary. They did warn you there would be no magic.

John Naulty, a member of the Cognitive Technology group, spent opening night affixing electrodes to people’s heads to allow them to control a robot arm. “We wanted to get people to think about this kind of technology, see where current prototypes are, and if they’re interested, help us make them better,” says Naulty. “There is a lot of room for improvement.”

And yet, many of the technologies in the exhibit are at the bleeding edge of what is possible in the laboratory, let alone in the wild. Guests get the first-person experience of putting on a headset and seeing their mental states projected out onto the world. They get to ask whether what a computer decides is calm is really calm, whether their focus is really focus. Guests line up the data with their own best working models and give them both the stink-eye, and before they know it, they are playing scientist. They leave with a new perspective on where we stand in understanding and controlling the brain. We know a lot, but we don’t know everything.

Hardware and software issues aside, what I found most impressive was the scientists’ sheer dedication to making these technologies accessible. The exhibit came together beautifully in the end. Volunteers from UC Berkeley, the Cognitive Technology group, and m0xy will continue to man the exhibit for the next month. “These people are passionate, and it’s contagious,” says Naulty.

“The biggest challenge for making the demo museum-ready was making it free-standing for a whole month with no intervention,” says Gao. “The interface has to be easy enough to use that someone can walk up with no previous knowledge and be able to work the exhibit.”

“James ran desperately into lab today to grab something or other off his computer,” Gallant helpfully confirmed on opening night.

I once read a super impassioned screed on the Muppets’ shift towards computer animation, and believe it or not, I think it’s relevant to the experience of interacting with these technologies in a museum and not just on YouTube. Of the irreplicable thrill of the old-school Muppet experience, Elizabeth Stevens writes, “As viewers, we know on some fundamental level that what we are seeing is not CGI, but a room full of exhausted, exhilarated professionals with their arms stuck inside of puppets.”

Cognitive Technologies creates a similar thrill, which one of the exhibit’s creators summed up beautifully in a Facebook post after opening night. Mike Pesavento, who with Bilenko gave Dr. Brainlove her EEG functionality, had worked to bring beautiful electron microscopy images and an Oculus Rift demonstration to the Exploratorium. He writes as only an exhausted, exhilarated science hero elbows-deep in Arduino boards could:

An amazing brain trust of artists and infrastructure and software devs and neuroscientists and hardware hackers got together 8 weeks ago and created an exhibit at the world famous Exploratorium. Bugs were being fixed in the last seconds. A superhero rewrote an entire backend in the last 48 hours to support 20 eeg headsets simultaneously. Partners from around the US reached out to help support this exhibit. Nerves were frazzled, tech didn’t work as expected, no one ate (there was whiskey!), and yet it came together. So amazingly happy to be a part of creating the Cognitive Technologies exhibit at the Exploratorium.

Image credit – Wikimedia – public domain

A report published last week by the Pew Research Center polled both the general public’s and scientists’ views on science and society. Although the public generally appreciates and regards science highly, when it comes to actual scientific topics, such as climate change and GMOs, views diverge.

The U.S. has traditionally held science and technology in high esteem. But waning NSF funding and certain Republicans’ attacks on government-funded research have led some scientists to question their role in society. Science and technology, however, are embedded in our everyday lives, from flu vaccines to solar panels. These polls suggest that the American public still recognizes and celebrates that, which is great news and a relief for all the scientists out there.

Unfortunately this starts to unravel when it comes to opinions on more technical topics. For example, 57% of U.S. adults view GMOs as unsafe to consume, compared to 11% of AAAS scientists. Similarly, only 28% of U.S. adults believe it’s safe to eat food grown with pesticides, whereas 68% of scientists consider it safe. Of the 13 topics in this section, 11 showed gaps between U.S. adults and scientists of more than 10 percentage points.

It’s not totally surprising that opinions would diverge on more technical issues. Scientists by definition have more training and expertise in the areas folks are being asked about; it stands to reason they have extra insight. What’s interesting, though, is that in areas where public opinion conflicts with scientists’ views, the public also thinks that there’s more controversy among scientists.

For example, let’s look at climate change. Scientists and the public differ pretty widely here: 50% of U.S. adults believe that climate change is caused by human activity, compared to 87% of scientists. You might notice that the scientific consensus here is only 87%, not the oft-cited 97%. The Pew study, however, polled scientists across many fields. The 97% includes only experts in climate science, but the 87% includes scientists in unrelated fields as well.

Depending on how you define it, then, the scientific consensus is between 87% and 97%; U.S. adults, however, believe the issue to be far more controversial than that. 37% of U.S. adults believe that scientists do not agree that climate change is caused by human activity, 57% believe scientists do agree, and the remainder were unsure. It’s not that the public doesn’t trust or believe scientists; they just believe that scientists themselves are unsure. Unfortunately, the study doesn’t report the overlap between the 57% who believe scientists agree and the 50% who believe climate change is human-caused. I’d speculate the overlap is considerable, though.
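That speculation can at least be bounded. Using only the two poll figures quoted above, inclusion-exclusion pins down how large or small the overlap could possibly be; this is a quick sketch, not anything reported by Pew:

```python
# Bounds on the overlap between two poll subgroups, via inclusion-exclusion.
# Figures from the text: 57% believe scientists agree that climate change
# is human-caused; 50% themselves believe it is human-caused.

believe_scientists_agree = 0.57
believe_human_caused = 0.50

# Two groups covering p and q of the population must overlap by at least
# p + q - 1 (when that is positive), and can overlap by at most min(p, q).
min_overlap = max(0.0, believe_scientists_agree + believe_human_caused - 1.0)
max_overlap = min(believe_scientists_agree, believe_human_caused)

print(f"Overlap is between {min_overlap:.0%} and {max_overlap:.0%} of all adults")
```

So at least 7% of adults must hold both beliefs, and the data are consistent with anything up to the full 50%; the survey alone can’t tell us where in that range the true overlap lies.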

The gap here is not just due to scientists’ expertise. Again, given the breadth of questions on the survey, the scientists polled are all answering questions about issues outside their main expertise. But on the issue of climate change, for example, the broad scientific community lands much closer to the expert consensus than the general public does.

This gap is pretty problematic when it comes to enacting effective policy around scientific issues. We see this at play in current government: only 15% of scientists said that the best science guides government regulations on land use. The numbers are somewhat better in other areas (58% of scientists say the best science guides regulations for new drugs and medical treatments), but that’s pretty dismal considering the wealth of excellent science available. Naturally there are many other factors at play here (large corporate interests, to name just one), but increasing public understanding of scientific issues could go a long way. 84% of scientists say lack of public understanding is a “major problem.”

Remember, though, the public still likes science. This isn’t a battlefield. It’s not that most U.S. adults don’t believe or trust scientists, but that they simply don’t know what scientists actually believe. Both scientists and the public agree that U.S. K-12 STEM education needs improvement. The takeaway here is that to close this gap, we need better science outreach, communication, and education. This is where the bottleneck is. So let’s make it wider. If you’re reading this blog, it’s likely you’re a scientist in some capacity. Get out there and communicate to the world all the awesome, helpful stuff you know. In case you need ideas, here are a few:

Volunteer for STEM outreach in schools. Bay Area Scientists in Schools (BASIS) is a great place to start if you’re a Berkeley affiliate.

Image credit – Allan Ajifo

A baseball player steps up to the plate. It’s the bottom of the ninth inning, with the bases loaded. The pressure is on. One, two, and finally three pitches go by: he chokes.

He’s not alone in his pain. Choking under pressure is a phenomenon as universal as it is mysterious. Humans, paradoxically, often perform worse when the stakes are higher. What causes our performance to tank precisely when it matters most?

Taraz Lee, who recently earned his PhD in psychology at UC Berkeley, decided to find out what causes this frustrating limitation on human performance and how to prevent it. Now working with Scott Grafton at UC Santa Barbara, Lee uses functional magnetic resonance imaging (fMRI) to scan people’s brains as they play a game. In this game, people used both hands to guide a green line into a red box; they were told to “help the snake get the apple.” They had a chance of receiving $5, $10, or $40 if they were successful.

People did fine when playing for small incentives. But with $40 on the line, they choked. Lee and Grafton found that in these high-pressure situations, a brain area called the dorsolateral prefrontal cortex (dlPFC) became hyperactive. At first, it seemed like this hyperactivity was causing people to choke. The dlPFC is involved in things like planning, memory, and reasoning. Maybe, Lee thought, choking was really just a form of overthinking.

But then he took a closer look. He found that the dlPFC was not just becoming more active, but also becoming more synchronized with activity in the primary motor cortex (M1), the area that controls muscle movements by sending commands down the spinal cord. Strangely, the more people managed to sync up the activity in these two areas, the less they choked.

“I thought I must have mixed something up in my code because it was the opposite of what I expected,” says Lee. “I thought that excess prefrontal control was detrimental to performance. It can be difficult to interpret fMRI results, and you have to be careful in drawing conclusions about relationships with behavior.”

Vikram Chib is a professor of biomedical engineering at Johns Hopkins University who also studies choking under pressure. Chib, who was not involved in Lee and Grafton’s study, says, “It could be that these signals from the dlPFC modulate self-control in some way and this is what protects people from having decreases in performance.”

Chib’s work has shown that people who worry more about losses in life and how to avoid them are, surprisingly, the ones who choke more often. Meanwhile, Lee and Grafton found that loss-averse individuals had poorer performance across the board, not just under stress. However, one other personality trait predicted choking under pressure: impulsivity.

“Our results suggest that more impulsive individuals have failures in top-down control in stressful situations,” says Lee. “I would think that people that have low loss aversion would exhibit more impulsivity, but this would need to be tested to be sure,” says Chib. And so, we are left with a puzzle: is good performance a matter of controlling our impulses, or learning not to fear losses?

In any case, it is unlikely that the solution will be one-size-fits-all. Pressure can be a good thing for some people, while others fall apart. “We think the chokers were basically maxed out at the $10 incentive level,” says Lee. “So it’s not that there isn’t any M1-dlPFC communication. It’s just that they can’t engage any more than they already are.”

Some of Lee’s current work focuses on training chokers to become champions. He has set out to prove the importance of the dlPFC for protecting against choking. Using a technique called transcranial magnetic stimulation (TMS), he disabled the dlPFC. “I turned champions into chokers,” says Lee.

While this result makes it clear that the dlPFC is critically important, we still don’t know what to do about it. For now, Lee and Grafton’s results suggest that it’s not how much you think, but how you think, that guides you to successful performance. Someday scientists may be able to sculpt these thought patterns using increasingly popular brain stimulation techniques like transcranial direct current stimulation (tDCS), which has already been shown to enhance learning in some situations.

While this is currently the stuff of science fiction, Lee is up to the challenge. “You’re talking to a guy who loves TMS,” he says. “I just like feeling like a mad scientist!”


I read Jurassic Park by Michael Crichton recently, and as someone who had never seen the movie, I was pleasantly surprised by the plausibility of the science he used to reconstruct his dinosaurs. The genetic engineering was reasonably accurate, and the dinosaur behavior was consistent with the current models. For those of you who have not read it, I highly recommend it.

Michael Crichton was highly trained in the sciences: he went to Harvard Medical School and did a postdoc at the Salk Institute. Other sci-fi writers are scientifically literate or are scientists themselves, such as Isaac Asimov, who was a biochemistry professor at Boston University. But many authors come from different backgrounds, and not all are scientists. I wanted to know how authors approach the creation of their worlds, so I did some research to find out what science fiction writers had to say about how much of their writing is actually inspired by science, and how much is fiction (or pure fantasy).

What I found was pleasantly reassuring. While the main purpose of writing is to tell a good story, many writers feel that the burden is on the author to have good, or at least plausible-sounding, science.

Robert Heinlein, author of classic SF books such as Stranger in a Strange Land and Starship Troopers, writes in his essay, On the Writing of Speculative Fiction, that while it is good and well to create a story that takes place in a fantastical and fictive universe, “as a result of this new situation, new human problems are created–and our story is about how human beings cope with those new problems”. He is a stickler for the science, though, and demands that any SF author needs “(a) to bone up on the field of science you intend to introduce into your story; (b) unless you yourself are well-versed in that field, you should also persuade some expert in that field to read your story and criticize it”.

Taking it one step further, Poul Anderson writes about one author who tried to make a planet circling a class B (one of the hottest) star with an atmosphere composed of hydrogen and fluorine gas. Under these conditions, the molecules would “react promptly and explosively”, rendering a world incapable of supporting life (it would be a hydrofluoric acid bath…). Mr. Anderson cites this as typical of writing from authors whose only worlds are “a world exactly like our own except for having neither geography nor history, or else it is an unbelievable mishmash which merely shows us that still another writer couldn’t be bothered to do his [or her] homework”.

Fortunately, Mr. Anderson is as encouraging and nurturing to new writers as he is curmudgeonly to the old, poor ones, and in his essay, The Creation of Imaginary Worlds, he provides lots of charts and graphs to help writers trying to create new environments. He spells out how to choose the size of a star, what its expected luminosity (or energy output) would be, and the distance a planet would have to be from that star to support life without being burnt to a crisp. But this is really only the start. After the planet’s mass and distance are decided upon, one needs to calculate the length of its year, as it can affect seasons, and therefore the “rhythm of life” on the planet. But wait, there’s more! If the luminosity of the star is greater than our sun’s, a human space traveler might notice that the shadows on this new planet are sharper than he or she is used to seeing. He goes on for pages, detailing the intricacies of planet creation, many of which may never appear in the final story, but should influence the author’s formation of life, behavior, and habitation on their planet.
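A back-of-the-envelope version of this star-and-planet bookkeeping can fit in a few lines. This is a deliberately crude sketch, not Anderson’s actual charts: it uses two textbook approximations (the main-sequence mass-luminosity relation and the inverse-square law), and the function name and example numbers are mine.

```python
import math

def habitable_distance_au(stellar_mass_solar):
    """Rough distance (in AU) at which a planet receives Earth-like flux.

    Two standard approximations for main-sequence stars:
      - mass-luminosity relation: L/L_sun ~ (M/M_sun)**3.5
      - inverse-square law: Earth-like flux at d = sqrt(L/L_sun) AU
    """
    luminosity = stellar_mass_solar ** 3.5
    return math.sqrt(luminosity)

# A Sun-like star gives ~1 AU, by construction.
print(habitable_distance_au(1.0))
# A star 1.5x the Sun's mass is ~4x as luminous, pushing the
# comfortable zone out to roughly 2 AU.
print(round(habitable_distance_au(1.5), 2))
```

From there, Kepler’s third law gives the length of the year, which is exactly the kind of knock-on consequence Anderson wants writers to chase down.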

My favorite essay was one by Hal Clement, called The Creation of Imaginary Beings, because he spent some time on chemistry, which is near and dear to my heart. He talks about why it is probably more reasonable to stick to life that is based on carbon instead of, for instance, silicon (silicon has a nasty habit of coordinating to oxygen and forming hard, rigid, crystalline substances, like quartz). But most importantly, he waxes lyrical about the importance of hydrogen bonds (found in almost all biologically relevant structures, like DNA and proteins), which can direct the self-assembly and reconfiguration of molecules, both properties needed for life. The danger, he implies, is that changing life (and the chemistry that makes it) too radically renders the story unbelievable, and therefore bad for anyone who has been exposed to science, even if the reader is doing his or her best to suspend disbelief.

After reading these essays, I began to think more deeply about the interplay of factors that allow us to live and exist on Earth, and why things are the way they are. In order to create a new world, a writer has to understand our world first, because ours is real and it works; Earth is the archetype of a known world that supports life. Science fiction merely takes the variables that we know to be important, changes the values a little, and sees what stories and relationships matter in that believable, but mutated, world.

References:

The essays referenced within this article are all found in the following anthology:


The UC Berkeley graduate student run group Bay Area Women in Machine Learning and Data Science will be hosting their first meeting next Wednesday, January 28th at 5:30 PM in 560 Evans Hall. For more information, see this meetup posting or the group page.

Speakers include Dr. Laura Waller, a UC Berkeley assistant professor in the EECS department; Dr. Marian Farah, a quantitative researcher at Climate Corp.; and Katrina Glaeser, M.A., a product analyst at Pandora.

Image: Zebrafish embryo, 27.5 hours post-fertilization. Ho-Wen Chen.

The evolution of vision was one of the most effective advances in the evolution of animal life: the vast majority of animals have eyes, and many animals, humans included, have made vision their go-to sense for perceiving the world. But light perception doesn’t always depend on eyes (think of plants growing towards light in dark places). And even neuroscientists can be surprised to find light perception hiding in the most unexpected of places.

Drew Friedmann is a graduate student in Udi Isacoff’s lab at UC Berkeley who studies how the spinal cord of zebrafish wires up its neurons during development. One year back, he was hard at work trying to expand on some of the earlier findings of his lab. Using state-of-the-art techniques that allow scientists to observe individual neurons talking to one another, members of the Isacoff lab had identified the window of time in which spinal cord neurons begin to work in synchrony in day-old zebrafish, a first step in the young zebrafish’s development of swimming behavior. (Zebrafish are translucent until they are about 5 days old, making observation of their internal neural activity quite easy).

Activity of motor neurons in the zebrafish spinal cord, firing in synchrony a mere 18 hours after fertilization. Koichi Kawakami.

The spinal cord, common to all vertebrates, is famous for containing synchronized circuits of neurons known as central pattern generators (CPGs). The spinal CPG coordinates walking in humans, galloping in horses, and swimming in fish – all with merely indirect oversight from the brain.

The Isacoff lab had pinned down the earliest example of a CPG in young zebrafish, by visualizing the emergence of synchronized activity in spinal cord neurons, but as usual, the lab was hungry for more. Friedmann wanted to know how these young neurons pulled off talking to one another at such a young age, when their chemical synapses – junctions between neurons that use neurotransmitters, like dopamine or glutamate, for communication – had not yet formed.

In very young vertebrates, neurons often communicate without neurotransmitters. Instead, the nervous system uses electrical synapses to connect neurons. These simple electrical synapses, also known as gap junctions, are composed of physical tunnels between neurons that allow electric charge to pass directly from one neuron to the next. As a young vertebrate ages, many of its electrical synapses are replaced with more-complicated chemical synapses, which are the primary type of synapse in adults.

To tackle the question of which types of electrical synapses were wiring up the early spinal CPG in zebrafish, Friedmann began to genetically disrupt the components of gap junctions, in the hopes of preventing the CPG from forming. Gap junctions are made of proteins called connexins, but in zebrafish, there are 37 types of connexins. “I was genetically knocking out the 37 connexins, one by one, but I was at a dead end,” he says. No matter which connexin was removed, the fish still developed synchronized CPGs. Friedmann needed more information to figure out how these neurons were communicating.

He decided to pursue the sometimes-controversial pastime of any biologist who has hit an experimental brick wall: a genetic screen. Instead of starting with 37 connexin genes, he would measure the expression of all the genes that were active in the young zebrafish spinal CPG, and work backwards to figure out which ones were responsible for the CPG’s synchronized activity. “We had a genetically-labeled line that included some of the active cell types in the spinal cord at that age,” he explains, giving him the ability to physically isolate the CPG neurons from the spinal cord and analyze their gene expression.

Friedmann purified these neurons from the spinal cord and used a technique called RNA sequencing to measure the levels of expression of all of their genes. He then compared the gene expression from these particular CPG neurons to the gene expression of the entire spinal cord, “on a whim” that perhaps the CPG neurons were expressing unique genes (perhaps a particular set of connexins) in order to synchronize their activity at such a young age.

“I was looking exclusively at expression levels of connexins in the active population, looking for ones more highly expressed than in the rest of the neurons,” he recounts. “But then I noticed that gene number 16 in the list, really high up, was ‘blah blah… opsin.'”

Something clicked. Opsins are the proteins found in the rods and cones of our retinas that give us our sense of sight. (Remember, rods and cones are just specialized, light-sensitive neurons). When light hits an opsin in a neuron, it sets off a chemical reaction inside the neuron, which can either excite the neuron (making it send a signal) or inhibit the neuron (preventing it from sending a signal). One-day old zebrafish don’t have functional eyes. So what in the world was an opsin doing in neurons in the developing spinal cord?

That day, Friedmann had already booked some time on Rachel, a popular microscope at the Molecular Imaging Center at UC Berkeley, for “some other experiment for the connexin project that didn’t work out.” Now determined to figure out why a light sensor was present in the spinal cord CPG of day-old fish, he began flashing light on the young zebrafish while simultaneously videotaping the activity of their spinal CPG neurons.

Based on a single paper he had dug up that morning on this particular opsin, Friedmann guessed that the opsin should excite the CPG neurons that contained it in response to light. The day was drawing to a close, though. “I’m in there for 2 hours and I’m not seeing [a light response],” he recalls, “and I’m ready to give up and go home. Then I opened up all my movies and flipped through them one by one, rapidly. And I noticed that right after the light flash, in every single movie, there was never an event.”

He continues. “But there was always a little gap [in activity]. I compressed all the videos on top of each other, and then it was really obvious, that there was a gap after each light flash. And that’s how I realized it was inhibitory.”
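Friedmann’s video-stacking trick is a classic move in trial-based data analysis: align every recording to the stimulus and average, so that a consistent effect stands out from noisy single trials. A toy sketch with simulated data (all numbers here are invented for illustration, not taken from the actual experiment):

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_frames, flash_frame = 48, 100, 50

# Simulate spontaneous activity: each frame has a 20% chance of a
# neural event (1.0 = event, 0.0 = silence)...
activity = (rng.random((n_trials, n_frames)) < 0.2).astype(float)
# ...but suppress all events for 10 frames after each light flash,
# mimicking an inhibitory opsin response.
activity[:, flash_frame:flash_frame + 10] = 0.0

# In any single trial the pause is easy to miss. Averaged across trials
# ("compressing the videos on top of each other"), the post-flash gap
# is unmistakable.
mean_rate = activity.mean(axis=0)
print(mean_rate[flash_frame:flash_frame + 10].max())  # 0.0 in the gap
print(mean_rate[:flash_frame].mean())                 # ~0.2 baseline
```

The same alignment-and-averaging logic underlies standard tools like the peristimulus time histogram in neuroscience.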

Light somehow paused the synchronized activity of the CPG neurons, via this opsin in the spinal cord. Friedmann’s bold genetic screen, searching for connexins, had serendipitously uncovered a light response in the spinal cord of day-old zebrafish. But what did this mean?

Early zebrafish development, from 0 to 72 hours post-fertilization. Note how young zebrafish remain trapped in their embryonic chorions until about 24 hours. Ed Hendel, Wikimedia Commons.

A mere 24 hours after fertilization, young zebrafish spontaneously flick their tails, somewhat akin to the first steps of a young human baby. However, at this age, the fish are often still stuck in a clear sac known as the chorion, so they can’t actually swim. These tail flicks inside the chorion cause a behavior known as ‘coiling.’ Friedmann knew that synchronized activity in the spinal CPG of zebrafish was responsible for this coiling, so the necessary next step, he says, “was a trivial experiment with some day-old fish,” which were always available in his lab. Would shining light on young fish – pausing the spinal CPG – stop their coiling behavior?

One last piece of serendipity then fell into place. The Isacoff lab “already had a board of LEDs ready to go” on an array of 48 small wells, each containing a single young zebrafish, for a different experiment. Friedmann’s spinal opsin happened to be most sensitive to light whose wavelength nearly matched that of these LEDs (green light of about 504 nm). Using this apparatus, he could simultaneously shine the correct wavelength of light to activate the opsin in these spinal cord neurons and videotape the behavior of all the fish.

“I tried it the next day. That was when I saw it. I’m watching the movie on the screen, I’ve got 48 fish in the dish, and they’re all doing their thing, coiling in the dark,” Friedmann says. “Then I turned on the light. Coil coil coi—they stopped. And that’s when I realized. I’m going to graduate.”

Friedmann’s work elegantly showed that light – detected by the spinal cord – could put the brakes on the zebrafish’s earliest attempts to swim. Though he had set out to better understand how the spinal CPG becomes synchronized, he had instead stumbled across a non-visual photoresponse in the spinal CPG, a response that directly influenced behavior.

It turns out that non-visual opsins, that is, opsins that do not play a role in vision, do play important roles for other aspects of biology, most famously for setting our circadian rhythms. Beyond that example, though, there are non-visual opsins in the brain, where light is thought to never penetrate, as well as in the heart, lung, and pancreas, amongst other organs. While we know these opsins are present in all these organs, their functions remain elusive. Light perception, it turns out, seems to extend well beyond the eyes.

Spontaneous coiling behavior in a zebrafish embryo, still trapped in its chorion sac. Harold Burgess/Company of Biologists.

In the case of day-old zebrafish, Friedmann can only speculate why the fish stop coiling in response to light. Perhaps this reflex prevents them from revealing themselves to predators in the daytime, at an age when they are still confined to their chorion sacs, unable to escape. Cloaked in the darkness, or in the shadows of a lily pad, the fish are safe to practice their tail flicks as much as they please.

But beyond this discovery itself, Friedmann’s journey from connexins to spinal opsins shows that, even in this day and age, it can be tough to predict what a scientist is going to find when digging into some well-defined problem, like synchronized activity in the spinal cord. Scientists are used to experiments turning up empty, but every now and then, they unexpectedly strike gold (and live for those moments). Over a hundred years ago, Louis Pasteur summed up the need for scientists to confront the uncertainty of science head-on with this timeless quote: “In the fields of observation, chance favors only the prepared mind.”

So if you run into a discouraged scientist as the new year commences, remind her that chance has noted all her hard work and preparation, and that even in the 21st century, it’s still possible to accidentally discover as crazy a thing as a light-sensitive spinal cord in a day-old zebrafish.

The International Year of Crystallography has seen many innovations and exciting scientific advancements. Science has named Rosetta’s Philae comet landing its Breakthrough of the Year. While we leave it to the big-wigs to round up the most exciting research across the globe in 2014, here is a smattering of some of the great science that […]

1. President Obama announced the BRAIN Initiative in 2013, offering up over $300 million to help scientists work toward the lofty goal of mapping every neuron in the human brain. This August, it was announced that scientists at UC Berkeley and several other institutions would begin working on a standardized neuroscience data format for Neurodata Without Borders; given the tremendous influx of new data and large collaborations, the need for a common data format is pressing. In November, Neurodata Without Borders held a hackathon to stir up ideas for possible new file formats, and BrainFormat, developed at LBL, was among those selected for further investigation. BrainFormat is open source. NERSC is also working with UC Berkeley scientists to create a data-sharing portal, Collaborative Research in Computational Neuroscience.

NIH has also recently awarded three UC Berkeley scientists a collective $7.2 million to contribute to the BRAIN Initiative. John Ngai and his team will use new techniques to identify different types of neurons, and employ CRISPR to genetically engineer mice that have these neurons tagged for research. David Feinberg will use these funds to work toward higher-resolution MRI. Finally, Richard Kramer and Ehud Isacoff plan to add photoswitches to neurotransmitters so they can be selectively turned off and on.

In September, researchers from UC Berkeley, UC San Francisco, and LBL met to discuss ideas for their new collaboration, BRAINseed. Six projects were selected, and focused widely on nanotechnology and optogenetics. Each institution in this tri-partnership has committed $1.5 million towards these research efforts.

2. After much hand-wringing and heated debate, it was announced in November that officials would reverse their decision to cut off funding to the Lick Observatory, the UC system’s only observatory (for now). The observatory is 126 years old and was the world’s first mountain-summit observatory. Funds were originally to be cut from Lick and diverted toward the construction of a next-generation telescope in Hawaii, the Thirty Meter Telescope (TMT). TMT is currently being built on Hawaii Island and is projected to be one of the most advanced telescopes in operation.

Lick has also undergone several upgrades. The Automated Planet Finder (APF) telescope has been watching the sky since January, making Lick the first robotic planet-finding facility. APF is the first telescope that can detect rocky extrasolar planets that may support life. UC Santa Cruz has also received $350,000 to upgrade the Kast Spectrograph on the Shane telescope, which is used to study supernovae.

3. This year, several prestigious honors were bestowed upon Berkeley scientists. Five faculty were elected to the National Academy of Sciences, three were awarded the National Medal of Science, three received the NIH Innovator Award, four were named AAAS fellows, and three were named fellows of the National Academy of Inventors. Biochemist Jennifer Doudna (AKA khaleesi) and two teams of cosmologists led by Saul Perlmutter and Adam Riess were named 2015 Breakthrough Prize winners, subsequently celebrating their cool $3 million in prize money by attending a ritzy gala honoring their work in November. Doudna received her award for CRISPR/Cas9; Perlmutter and Riess accepted theirs on behalf of the Supernova Cosmology Project and High-Z Supernova Search teams, which originally discovered the accelerating expansion of the universe.

What did you think was the most notable research done at UC Berkeley in 2014? Let us know in the comments below!