
Our bodies are marvels of precise control, synchronization and design. Every one of our cells has the same genetic sequence, but we have many different types of cells – heart, muscle, lung, skin. Amazingly, our body has a mechanism to determine which cell is which even though they all share the same code. The field of epigenetics dives into this phenomenon. Epigenetics is the study of changes to DNA that do not alter the actual sequence but instead modify its activity by repressing or activating certain parts of the DNA. In short, epigenetics can reversibly turn genes on and off without changing the DNA sequence.

The genes in our body are like words that have to be spelled a certain way in order for them to work properly. All genes are made up of “base” molecules, each assigned a specific letter (A, C, G, or T). These bases combine into 3-letter “words,” or codons, each of which specifies an amino acid. Amino acids serve as the “words” that form the “sentences,” or proteins, that govern all the biological processes necessary for life. None of these processes works properly, however, if there are misspellings in the genetic code. Mutations are misspellings of the original genetic code, introduced by deleting, duplicating, substituting or inverting parts of a gene. They are permanent changes to the DNA which can be passed on from generation to generation, and they are the cause of many heritable diseases.
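The spelling analogy can be made concrete with a small Python sketch. The codon table below is a hypothetical excerpt for illustration (the real genetic code maps all 64 codons); the sketch “reads” a DNA string three letters at a time and shows how a single-letter substitution misspells one of the protein’s “words”:

```python
# Tiny excerpt of the genetic code; the full table has all 64 codons.
CODON_TABLE = {
    "ATG": "Met", "GCT": "Ala", "GGT": "Gly",
    "TGT": "Cys", "TAA": "STOP",
}

def translate(dna):
    """Read a DNA string three 'letters' (one codon) at a time."""
    return [CODON_TABLE[dna[i:i + 3]] for i in range(0, len(dna), 3)]

original = "ATGGCTTGT"   # Met-Ala-Cys
mutated  = "ATGGGTTGT"   # a single C->G substitution in the second codon

print(translate(original))  # ['Met', 'Ala', 'Cys']
print(translate(mutated))   # ['Met', 'Gly', 'Cys'] -- one "word" is now misspelled
```

One changed letter swaps an alanine for a glycine, which is exactly the kind of “misspelling” that, in a real gene, can alter or break the resulting protein.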

For a long time, genetic changes were thought to be permanent, but reversible epigenetic changes were uncovered around 1950 and have led to an explosion of knowledge in understanding the human body. Conrad Waddington was the first scientist to propose the concept of epigenetics. He studied embryonic development and saw how an embryo gave rise to all the different types of cells, even though every cell had the same genetic sequence. He visualized this model with “Waddington’s landscape,” which used the analogy of a marble rolling down a hill into different troughs to represent the developing cell becoming a muscle cell, heart cell or any other cell.

The marble example that Waddington used to describe an embryonic stem cell becoming other cells.

Alternative splicing is one epigenetic mechanism that allows cells to choose among multiple fates. This happens all over the body, such as in the brain, heart, and muscle. Our genome is large, but only about 2% of it codes for proteins; much of the remaining 98% helps regulate those protein-coding genes. Alternative splicing is one way that we fully utilize the 2% that codes for protein, and it accounts for much of our complexity. Splicing allows the “word” of one gene to be broken up in many different ways to make many different products. The word “lifetime” can be broken up into ‘life’ and ‘time,’ but can also be rearranged to make the words ‘fit,’ ‘lie,’ and ‘tile.’ The parts of protein-coding genes can likewise be broken down and mixed and matched to produce different proteins. The sites for splicing are determined by the tightness of DNA packing, the accessibility of the DNA, and other epigenetic factors that are still being actively researched.

An example of how alternative splicing can produce different protein products.
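The mix-and-match idea can be sketched in a few lines of Python. Assume a hypothetical gene whose first and last exons are always kept while the middle exons are optional (one common pattern, called exon skipping); enumerating the possible transcripts shows how a single gene yields several products:

```python
from itertools import combinations

def possible_transcripts(exons):
    """List transcripts that keep the first and last exon
    and optionally skip any subset of the middle exons."""
    first, *middle, last = exons
    transcripts = []
    for r in range(len(middle) + 1):
        for kept in combinations(middle, r):
            transcripts.append([first, *kept, last])
    return transcripts

# A hypothetical 4-exon gene: 2 optional middle exons -> 2**2 = 4 transcripts.
for t in possible_transcripts(["exon1", "exon2", "exon3", "exon4"]):
    print("-".join(t))
```

With just two optional exons there are already four possible transcripts; real genes with a dozen or more exons can, in principle, encode far more isoforms than this toy example.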

Dr. Jimena Giudice at the University of North Carolina at Chapel Hill is actively investigating the epigenetics of alternative splicing in the heart, trying to determine why certain heart diseases cause the heart to revert to fetal alternative splicing patterns instead of adult ones. For a few weeks after birth, the muscle cells needed to contract the heart are not yet mature and use a different alternative splicing pattern that facilitates growth into adult muscle cells. Eventually, the cells switch to the splicing pattern that marks adult muscle cells, which are large and can pump blood more efficiently.

If you’re interested in reading more about epigenetics and its history, I highly recommend Nessa Carey’s Epigenetics Revolution and Siddhartha Mukherjee’s The Gene.

T cell-based therapies, or “living drugs” as coined by Dr. Carl June, utilize the potent killing activity of T cells, an arm of the immune system, to target cancers. In the early stages of T cell-based therapy, T cells were isolated from tumors, expanded ex vivo, selected for specific anti-tumor clones, and infused back into the patient. Nowadays, T cell products are genetically modified to express receptors to more specifically target cancers with better persistence in patients. So how are these “living drugs” manufactured?

Here at UNC, a Good Manufacturing Practices (GMP) facility housed off of NC-54 generates all the T cell products used in phase I/phase II clinical trials by the Lineberger Comprehensive Cancer Center. These facilities are regulated by the US Food and Drug Administration under the authority of the Federal Food, Drug, and Cosmetic Act.

Laboratory technicians working hard at UNC GMP.

I spoke with Paul Eldridge, PhD, Director of the UNC Lineberger Advanced Cellular Therapy Facility, to learn more about how GMP facilities work. Dr. Eldridge was recruited in 2014 by the Lineberger Comprehensive Cancer Center, which was interested in starting a cellular immunotherapy program and building a GMP facility. Dr. Eldridge’s personal interests are in chimeric antigen receptor T cells (CAR-Ts) and hematopoietic stem cells, with a focus in cancer immunotherapy.

An excerpt of our conversation is below, edited for clarity:

Lee Hong (LH): What products are manufactured at UNC’s GMP facility?

Paul Eldridge (PE): Here at UNC, we focus on advanced research products. The FDA divides cell products into minimal manipulation and more than minimal manipulation. Minimal manipulation essentially does not change the character of the cell, which means you can isolate, purify, or freeze the cells. More than minimal manipulation involves putting cells into tissue culture.

LH: Huh, why is that?

Tissue culture facility at UNC GMP.

PE: Well, when you put cells into culture, they are dividing, experiencing a different stimulus in the culture medium, and may differentiate into other cell types. In other words, anything that could potentially change the innate nature of the cell is considered more than minimal manipulation. Certainly gene manipulation would be included here as well. How you intend to use the cell products, what the FDA calls “homologous use,” also matters. If the investigator intends to use the cells in a manner in which they do not normally function (i.e. non-homologous use), the FDA kicks those products up to a higher regulatory environment and calls them more than minimal manipulation.

LH: So at UNC, are most of the advanced research products you work on derived from peripheral blood?

PE: Yes, we mainly manufacture CAR-T cells from peripheral blood. We are also working with another investigator, Shawn Hingten, who is using skin fibroblasts. Outside of UNC, other investigators are using adipose-derived stem cells or mesenchymal stem cells common in regenerative medicine.

Schematic of CAR-T cell synthesis using peripheral blood T cells

LH: How is UNC’s GMP facility set up?

PE: The facility is 5,000 square feet, with half of the space as clean room facilities. We have six separate processing rooms, five for patient samples and one with a different air system for virus protection. It’s an ISO-7 environment, meaning we use “bunny suits” and have to re-gown each time we enter or leave a room.

Patient rooms are kept at positive air pressure relative to the hallway in order to minimize anything drifting back into the room. The air is 80% recirculated. In contrast, the virus room is at negative pressure and the air is 100% single-pass filtered with no recirculation.

LH: Oh wow, there are a lot of details involved here.

PE: Yes, part of building a GMP facility is paying attention to construction details. We designed ours for an academic center, which is a different layout than that for pharmaceutical manufacturing.

LH: So how much of what you do in a GMP facility is automated versus manual?

PE: It really depends on where you are in product development. In our facility, we are focused on early phase I trials in which we have not nailed down a manufacturing process so we are pretty manual in most of our applications. Part of what we’re doing is learning how to manufacture the cells we need with minimal effort in a system that is as closed as possible.

As we move to phase II, then we start looking at scaling up due to the need for more cells. This is where bioreactors can be helpful and the steps become more automated. Cell therapy is where drug manufacturing was 75 years ago, in the sense that not much is automated. But nowadays, the technology is continually advancing. Miltenyi is offering a bioreactor called CliniMACS Prodigy that makes it sound as easy as pushing a button.

Katie McKay, Associate Director for Manufacturing, uses an inverted tissue culture microscope to count cells on a slide while working in the cell culture room at the UNC Lineberger Advanced Cellular Therapeutics Facility on June 16, 2017, in Chapel Hill.

LH: What sort of training and skill sets are needed for someone to work at a GMP facility?

PE: It breaks down into a couple of areas. One is whatever the process requires, in this case usually tissue culture (TC). We do a lot of TC as part of manufacturing cells. Another area is in regulated, quality control testing. We do a lot of characterization analysis on our cell products. We establish release specifications for every product we make so we have to do all the assays before patients can receive them. These assays aren’t necessarily done in the GMP facility, just wherever we can do it most easily.

The important thing is that all the assays need to be performed in a more GMP manner than you might encounter in a basic research lab. Documentation, Standard Operating Procedures (SOPs)…we do everything by SOPs. This is because we need to trace all of our materials, use everything within its expiration date, and keep up with instruments for calibration and maintenance. We also train people on site on whatever they’re doing, document the training, and ensure trainees maintain competency for quality control testing. In other words, we do all the tests you see in a typical research lab but in a more stringent, reproducible, and regulated manner.

Another important skill is learning how to work in a clean environment. Everyone thinks they know how to use a biological safety cabinet (BSC), but there are good ways and bad ways, and there are ways you have to operate when you are trying to minimize the risk of cross contamination. So we do a lot of cleaning, and we have to document everything we do.

In general, I don’t necessarily need someone with PhD credentials but I do need someone who is smart, dedicated, and extremely detail-oriented. We are looking more for personality and attitude than specific qualifications.

Katie McKay, Associate Director for Manufacturing, organizes supplies while working at the UNC Lineberger Advanced Cellular Therapeutics Facility on June 16, 2017, in Chapel Hill.

LH: How are careers and/or skills used at GMP facilities in academic centers different than in pharmaceutical companies?

PE: Well, in an academic center you see everything, and that’s the most enjoyable part of it. We will often break out into more research and development (R&D) work as opposed to hands-on, clean manufacturing work. People float back and forth between what they are comfortable doing. I have people with PhDs and high school degrees working in the facility.

From the industry side of things, a very different set of skills is needed. That’s because by the time you get to phase II/III clinical trials, the process is set and there is no situation where you’ll be making changes. Of course, you still need to have attention to detail and be thorough, but the important aspect is to follow the instructions and nothing else. However, when something doesn’t work, you’ll need enough wherewithal to understand whether it was a process accident or a random occurrence.

LH: Finally, where do you see cell therapy going in the next 25 years?

PE: It’ll become more rote, with more big pharma involved. The current model, as long as we are talking about autologous starting material (i.e. cells from the same patient), is not really scaled up so much as scaled out. There are still individual batches (or lots) made for individual patients. Where will we do this? It’s still not clear where it is economically advantageous to do so.

For example, Novartis has a centralized manufacturing facility [for Kymriah]. That works fine for now, but will Novartis keep up with material demands? It’s not just tissue culture media; they have to make lentiviral vector, and the suppliers right now can turn out product for only 25-30 patients at a time. No one has ever tried making 10,000 personalized products before. Moreover, the FDA requires lentiviral vectors to have a shelf life of a couple of years, so vector suppliers are desperately trying to scale up. We are still in the early wild west stage, but it is fascinating.

The best models of how our world works are incomplete. Though they accurately describe much of what Mother Nature has thrown at us, models represent just the tip of the full iceberg and a deeper understanding awaits the endeavoring scientist. Peeling back the layers of the natural world is how we physicists seek a deeper understanding of the universe. This search pushes existing technology to its limits and fuels the innovation seen in modern day nuclear and particle physics experiments.

This is a map of the SNOLAB facility. It’s 2 km (~1.2 miles) underground and is the deepest clean room facility in the world!

Today, many of these experiments search for new physics beyond the Standard Model, the theory physicists have accepted to describe the behavior of particles. Some physical phenomena have proven difficult to reconcile with the Standard Model, and research seeks to improve understanding of those conundrums, particularly regarding the properties of elusive particles known as neutrinos, which have very little mass and no electric charge, and of dark matter, a mysterious cosmic ingredient that holds the galaxies together but whose form is not known. The experiments pursuing these phenomena each take a different approach toward the same unknowns, resulting in an impressive diversity of techniques geared towards the same goal.

On one side of the experimental spectrum, the Large Hadron Collider smashes together high-energy protons at a rate of one billion collisions per second. These collisions have the potential to create dark matter particles or spawn interactions between particles that break expected laws of nature. On the other side of the spectrum, there is a complementary set of experiments that quietly observe their environments, patiently waiting to detect rare signals of dark matter and other new physical processes outside the realm of behavior described by the Standard Model. As the signals from new physics are expected to be rare (~1 event per year, compared to the LHC’s billion events per second), the patient experiments must be exceedingly sensitive and avoid any imposter signals, or “background,” that would mimic or obscure the true signal.

The quest to decrease background interference has pushed experiments underground to cleanroom laboratories set up in mine caverns. While cleanrooms reduce the chances of unwanted radioactive isotopes, like radon-222, wandering into one’s experiment, mines provide a mile-thick shield from interference that would be present at the surface of Earth: particles called cosmic rays constantly pepper the Earth’s surface, but very few of them survive the long journey to an underground lab.

The rate at which muons, a cosmic ray particle, pass through underground labs decreases with the depth of the lab. At the SNOLAB facility, shown in the lower right, approximately one muon passes through a square centimeter of the lab every 100 years.

The form and function of modern underground experiments emerged from the collective insights and discoveries of the scientific community studying rare physical processes. As in any field of science, this community has progressed through decades of experimentation with results being communicated, critiqued, and validated. Scientific conferences have played an essential role in this process by bringing the community together to take stock of progress and share new ideas. The recent conference on Topics in Astroparticle and Underground Physics (TAUP) was a forum for scientists working to detect dark matter and study the properties of neutrinos. Suitably, the conference was held in the historic mining town of Sudbury, Ontario, home to the Creighton Mine, at the bottom of which lies SNOLAB, a world-class underground physics laboratory which notably housed the 2015 Nobel Prize winning SNO experiment. SNO, along with the Super-Kamiokande experiment in Japan’s Kamioka mine, was awarded “for the discovery of neutrino oscillations, which shows that neutrinos have mass.”

There is a natural excitement upon entering an active nickel mine, donning a set of coveralls, and catching a cage ride down into the depths; this was our entrance into the Creighton Mine during the TAUP conference. After descending an ear-popping 6800 feet in four minutes, we stepped out of the cage into tunnels of raw rock, known as drifts. From there, we followed the path taken every day by SNOLAB scientists, walking approximately one kilometer through the drifts to the SNOLAB campus. At SNOLAB, we prepared to enter the clean laboratory space by removing our coveralls, showering, and donning cleansuits. Inside, the rock walls are finished over with concrete and epoxy paint, and we walked through well-lit hallways to a number of experiments which occupy impressively large caverns, some ~100 feet high.

Physicists visiting SNOLAB get a close-up view of the DEAP-3600 and MiniClean dark matter experiments. Shown here are large tanks of water that shield sensitive liquid argon detectors located within.

Our tour of SNOLAB included visits to several dark matter experiments, including DEAP-3600 and MiniClean, which attempt to catch the faint glimmer of light produced by the potential interaction of dark matter particles with liquid argon. A stop by PICO-60 educated visitors on another captivating experiment, which monitors a volume of a super-heated chemical fluid for bubbles that would indicate the interaction of a dark matter particle and a nucleus. The tour also included the SNO+ experiment, offering glimpses of the search for a rare nuclear transformation of the isotope tellurium-130; because this transformation depends on the nature of neutrinos, its observation would further our understanding of these particles.

SNOLAB is also home to underground experiments from other fields. The HALO experiment, for instance, monitors the galaxy for supernovae by capturing neutrinos that are emitted by stellar explosions; neutrinos may provide the first warnings of supernovae as they are able to escape the confines of a dying star prior to any other species of particle. Additionally, the REPAIR experiment studies the DNA of fish kept underground, away from the natural levels of radiation experienced by all life on the surface of Earth.

The search for rare signals from new physical phenomena pushed physicists far underground and required the development of new technologies that have been adapted by other scientific disciplines. The SNOLAB facility, in particular, has played a key role in helping physics revise its best model of the universe, and it can be expected that similar underground facilities around the world will continue to help scientists of many stripes reveal new facets of the natural world.

In almost any field, particularly those in science and engineering, you encounter revolutionary technologies that promise faster, cheaper, and easier processes. Some of these advances, such as computers, social media, and smart technology, have changed the way an entire generation thinks and interacts with the world. What will be the next great breakthrough to transform the next generation? Many people believe it will be 3D printing.

Over 200 million years ago, a reptile, 11 feet long and 1500 pounds, was prowling about, likely feeling very pleased with himself. Not only did he have four crunchy creatures starting to digest in his stomach, but he had bitten another weakling in the neck and then crushed it under his left knee. Just at this moment of triumph, the reptile got stuck in the mud of ancient Jordan Lake, and slowly drowned.

Around the same time, by the seaside of what would one day become Italy, the forerunners to today’s oyster were nestling on the sea floor.

41 years ago, in 1976, Dr. Joe Carter obtained his PhD from Yale University and then drove down with his wife to start a new job at Chapel Hill’s geology department. He came as a sleuth for fossils. Ancient oysters, clams, mollusks, bivalves – Dr. Carter wanted to learn as much about them as he could.

Dr. Carter and one of his fossil replicas

For most of us, shells are just the violet-tinted, half-moon shaped spectacles that nip our feet at the beach.

But for Dr. Carter, these bivalves – and especially their fossils resurrected millions of years after they lived – are clues into evolutionary history.

In 1980, Dr. Carter took a trip to the mountains of northeastern Italy. There, he found an 80-year-old man who had been collecting Triassic fossils for decades – bivalves that lived 200 to 250 million years ago. The prospect of so many fossils was like coming upon a casket of jewels for Dr. Carter. The Italian man gave him a generous sampling of his fossil collection, and Dr. Carter fell to examining them.

“Well, this looks like an oyster,” Dr. Carter speculated, as he dwelt upon one of his fossils. Or was it? Oyster fossils dated back 200 million years, beyond which they disappeared into the guarded slumber of the unwritten past. Scientists had assumed this marked the juncture at which oysters evolved, and as they cast about for a suitable ancestor, they decided upon scallops: both oysters and scallops have similar, non-pearly shells.

But perhaps the little Italian oyster told a whole new story. To investigate, Dr. Carter participated in a blatant case of disturbing the peace of the deceased. He took his Italian bivalve, sharpened his knife, and embarked on a long-delayed autopsy.

He dissected the defenseless fossil into impossibly tiny 150 micrometer slices. He examined each slice carefully under a microscope, then enlarged them on plastic drafting paper. Then, he had a “eureka” moment.

Today’s oysters are almost all calcite and non-pearly. But Dr. Carter’s ancient Triassic oyster had only a hint of calcite and it consisted mostly of mother-of-pearl. Could the mother-of-pearl oyster indicate that oysters evolved from “pearl oysters”, rather than from scallops?

Mementos from a long career

It was time to see if DNA could confirm the hint provided by the fossil record, a task given to Dr. Carter’s student, Kateri Hoekstra. She performed one of the first DNA analyses of living bivalves ever to focus on their evolutionary relationships. Just as the fossil record predicted, the DNA confirmed that the oyster from the Italian mountains, dug up after its rest of 221 million years, was a closer relative of pearl oysters than scallops.

Dr. Carter sent a letter to many natural history museums in Italy, asking them to find more of the mother-of-pearl oyster. But no one ever did. Still, Dr. Carter had fine pictures and drawings of the single known fossil. People started citing the fossil as UNC-13497b.

Such a clunky name would never do for the only mother-of-pearl oyster in the world, even if it did honor our great university. Dr. Carter finally christened it Nacrolopha carolae: Nacrolopha after the nacre (mother-of-pearl) in the Lopha-like oyster, and carolae after his wife, Carol Elizabeth.

This is the sweet side of invertebrate paleontology: a fine day in the Italian mountains, mother-of-pearl oysters, and suffusing the faint echoes of history with the name of your loved one.

But not everyone wants to give fossils their due attention.

The fossil record isn’t always perfect. For example, jellyfish rarely even leave fossils. For snails, the fossil record is misleading due to convergent evolution. The same features evolved in so many different snails that it’s hard to put things in order. You see the same shapes come up again and again.

As a result, many biologists have decided to send the fossil record packing. Since it doesn’t illuminate relationships for all groups of species, the possibility that it might provide crucial clues for a few goes largely unexplored.

On the other side of the line-up, you have a handful of scientists, Dr. Carter, his former students, and his research colleagues among them. They are trying to convince the biologists that for some groups of species – especially bivalves – the fossil record is actually crucial.

It’s an uphill battle because, as Dr. Carter explains, the biologists have all the money. They are awash with government funds through the “Tree of Life” project that puts primary emphasis on DNA linkages between species.

Dr. Carter working in his lab.

Dr. Carter recognizes that DNA is a necessary tool. After all, it was Kateri’s DNA analysis that confirmed the origination of Nacrolopha carolae and modern oysters from pearl oysters. But it’s not the whole story. For example, DNA tells us that our closest relatives are the chimps. But that does not mean we evolved from them, or them from us! Fossils are the missing key that can shed light on the extinct creatures who filled in the evolutionary gaps.

Dr. Carter, along with David Campbell, his former student, now a professor at Gardner-Webb University, published a paper where they described how DNA and the fossil record can be used in symphony. Unfortunately, as Dr. Carter explains, “lots of people thought it was baloney.”

That reception is not stopping Dr. Carter. He and David Campbell are trying to publish a series of papers with examples of how DNA can give faulty evidence that the fossil record can correct. As Dr. Carter says, it will be interesting to see what the opposition says at that point.

Opposition aside, there’s one set of fossils that dazzles everyone – those of dinosaurs. Dr. Carter’s one foray into reptilian fossils happened by accident. Two of his students were studying a Durham quarry in 1994, when they came across the ankle bones of “a weird new guy”. It was the same unfortunate creature that, having filled his stomach with four prey, sank into a mudhole of ancient Jordan Lake and drowned just at its very moment of triumph. Digging it up, Dr. Carter and his students found hundreds of bones. Once cleaned and reassembled, it turned out to be a reptile shaped very much like a dinosaur, but not quite.

Dinosaurs roamed about on tiptoe, but this reptile’s foot walked on both toes and heels, like humans do. It was the best-preserved skeleton of this group of reptiles ever found. Dr. Carter toured museums in Europe and the US to make sure the reptile had not been named before.

Just thereafter, Karin Peyer walked into Dr. Carter’s office. She had an undergraduate degree in paleontology, a husband starting graduate school at UNC, and time on her hands. She asked: do you need any help?

“Boy, did you come at the right time!” Dr. Carter greeted her. Karin worked with Dr. Carter and experts from the Smithsonian Institution to formally describe and name the find.

They called it Postosuchus alisonae – alisonae a tribute to a friend of Dr. Carter’s who was dying of cancer at that time.

*******************

It was December 2015. In Dr. Carter’s large dim lab, filmy sheets of plastic drafting paper were ruffling in a soft breeze from the open window looking out on a hillside over Columbia Street. Sickle-shaped knives were stacked here and there, beside replicas of treasure from King Tut’s tomb. In between sectioning and sketching an ancient bivalve called Modiolopsis, Dr. Carter was packing.

Dr. Carter at his retirement party

He was retiring after 39 years. In practice, that merely means that Dr. Carter can now avoid going to faculty meetings. Otherwise, he can still serve on graduate student committees; he is coordinating the revisions of bivalves in the Treatise of Invertebrate Paleontology. He still has fossils to section and examine, and biologists to convince of the worth of the fossil record.

The only difference is, when Dr. Carter began his professorial work, it was just him and his wife, a young daughter and a baby boy. Now his daughter is 45 years old and his son is 39, and they both have their own families that Dr. Carter will be spending a lot of time with. It’s amazing what changes four decades can bring. But perhaps it’s easier to be philosophical and surrender to what’s ahead when you hold in your hands an oyster that lived 221 million years ago.

If a brain could talk, what would it say? Probably nothing profound or understandable. Rather, it would emit a bustling clamor of messages between neurons. These messages are delivered by chemicals called neurotransmitters. Different groups of neurotransmitters serve unique roles in the brain. Because there are so many chemicals sending so many messages, it’s often hard to know what each neurotransmitter group is saying. Scientists want to isolate the activity of individual neurotransmitter groups to better understand how these chemicals behave during both normal and abnormal behavior.

To decipher this chemical chatter, scientists use a technique called fast-scan cyclic voltammetry (FSCV). FSCV allows scientists to see real-time changes in neurotransmitter activity as an animal performs a behavior. It’s a way to correlate an individual’s behavior with a certain chemical’s activity. First, scientists insert an electrode into a brain area of interest. At the electrode’s tip is an incredibly thin carbon fiber — about the width of a strand of hair. Scientists then cycle the electrical potential at the carbon fiber tip up and down like a rollercoaster. This “cycling” helps the electrode detect specific neurotransmitters. As the electrical potential cycles up and down, the neurotransmitter is oxidized and reduced — a chemical process that can be measured as a current. The electrode is like a precise microphone, magnifying the murmur of a specific neurotransmitter’s activity amidst the din of other messengers.

The FSCV process.
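The “rollercoaster” cycling can be made concrete with a short Python sketch of a single triangular FSCV scan. The numbers below (a sweep from roughly -0.4 V up to +1.3 V and back at 400 V/s) are parameters commonly cited for dopamine detection; treat them as illustrative, not as a lab protocol:

```python
def triangle_scan(v_rest=-0.4, v_peak=1.3, scan_rate=400.0, dt=1e-5):
    """Return (times, potentials) for one up-and-down voltage sweep.

    v_rest, v_peak : holding and switching potentials in volts
    scan_rate      : ramp speed in volts per second
    dt             : time step in seconds
    """
    ramp_time = (v_peak - v_rest) / scan_rate  # seconds per one-way ramp
    n = int(ramp_time / dt)
    up   = [v_rest + scan_rate * i * dt for i in range(n + 1)]   # climb
    down = [v_peak - scan_rate * i * dt for i in range(1, n + 1)]  # descend
    volts = up + down
    times = [i * dt for i in range(len(volts))]
    return times, volts

times, volts = triangle_scan()
print(f"scan lasts {times[-1] * 1e3:.2f} ms, peak {max(volts):.2f} V")
```

Each up-and-down ramp takes only a few milliseconds, and the scan is typically repeated several times per second with the electrode held at the resting potential in between, which is what gives FSCV its sub-second “real-time” view of neurotransmitter release.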

Examining the Chatter

The shape of the electric current signal often differs between different neurotransmitters, allowing scientists to tell them apart. Computer programs then convert this electric current into a concentration that scientists can measure. Translating the oxidation/reduction reaction to current is much like translating foreign speech into a language you understand, and the shape of the current is much like an accent – an identifying feature of that language.

Once an electrode is in place and this translation process begins, scientists can examine how a neurotransmitter’s signal changes during certain behaviors, such as decision making. Afterward, sophisticated analyses measure the shape of the current signal to ensure that it is actually from a neurotransmitter, and not some other eavesdropping molecule or contaminant.

Why FSCV?

Dopamine is one of the neurotransmitters that FSCV can measure.

FSCV is incredibly useful in that it can specifically isolate and measure transmitters such as dopamine and serotonin. This is important because in many brain diseases, the chemical chatter of these neurotransmitters goes awry. For instance, dopamine deficiency is a hallmark of Parkinson’s disease, and schizophrenia often features imbalances in dopamine levels. Abnormalities in serotonin have classically been linked to depression (although that link is being contested) as well as Sudden Infant Death Syndrome (SIDS).

Despite all this prior knowledge, scientists still don’t know exactly how these neurotransmitters contribute to different behaviors and diseases. As such, FSCV is a technique that furthers our understanding of how neurotransmitters behave. Scientists can use FSCV to examine how neurotransmitters function, both normally as well as in disease states, to help unlock the secrets of treating brain diseases.

Congratulations to Dr. Margaret Scarry! A longstanding faculty member of the Anthropology Department at UNC-CH, Dr. Scarry was recently promoted to Director of the Research Labs of Archaeology (RLA) and Chair of the Curriculum in Archaeology. Having received her PhD from the University of Michigan in 1986, Dr. Scarry has since garnered professional renown for her research on the cultural, social and economic practices surrounding the production and consumption of food. Specifically, she uses archaeobotanical data to explore the foodways of the late prehistoric and early historic peoples of the southeastern United States.

For those who are not familiar, the UNC RLA’s primary mission is to enhance knowledge of the archaeology and history of the ancient southeastern United States, but it also offers broad support for student and faculty archaeologists in classics, religious studies, linguistics, and gender studies in addition to anthropology. The RLA curates vast archaeological collections while supporting graduate student and faculty research in the southeastern United States and abroad. Most importantly, this mission is constantly expanding to encourage archaeologists who work abroad, from Dr. Patricia McAnany’s participatory research in the Maya region of the Yucatán Peninsula to Dr. Silvia Tomášková’s research on the stone engravings of South Africa. This collaborative and interdisciplinary tenet of the RLA is also apparent in the Curriculum in Archaeology. Although housed in the RLA, the Curriculum was first created through a working group of archaeologists across disciplines who felt their diverse approaches to archaeology offered a strong and unique program for undergraduate study.

I had the chance to sit down with Dr. Scarry recently to speak about her new roles and what’s in store for the future. When I asked Dr. Scarry about her plans for the RLA, she responded with equal parts excitement and pride. “I have a fantastic group of collegial and enthusiastic people who work with me,” she says. In an external review just last year, both the Curriculum and the RLA were heralded as “gems,” yet both remain relatively unknown on campus and to the general public. As a result, Dr. Scarry mentions, “one of the things I want to do is grow our reputation so that we are more visible.” This visibility will not only strengthen “the ties amongst archaeologists across campus” but also create a place for both graduate students and faculty members to succeed.

Dr. Scarry is also immensely proud of the RLA’s strong relationship with Native American communities, both on campus and more broadly. “We’ve tried to be a leader and a partner, to be sensitive to the political and ethical issues of the conjunction of archaeology and Native American concerns.” She thinks it is imperative to continue to foster these relationships, and is actively seeking out opportunities with other RLA faculty members to develop similar relationships with other communities worldwide.

Further, Dr. Scarry is aiming to expand the technological resources of the RLA available to student researchers. “We have a current initiative to work on 3D imaging and virtual reality and we hope to increase our computing capacity for that,” she says. Ultimately, Dr. Scarry says, “we encourage people to see who we are. I’d like for [the RLA] to be a home where people can get involved.”

As a graduate student associated with the RLA, I can agree with Dr. Scarry when she says “we value the students here. We have such a great community because our students push each other, not out of competition, but because there is a synergism, and we want to see each other succeed.” If you would like to learn more, click here.

Special thanks to Dr. Scarry for speaking with me. Peer edited by Suzannah Isgett and Alissa Brown.

Halloween is a time of year when we hanker for the horrific, ogle at the ugly, and revel in the rotten. And in this election year, we’re just as likely to overhear conversations about repugnant costumes (like gory zombies or bloody brides) as we are comments on disgusting (or “nasty”?) politicians.

“9-1-1 … Hello! Somebody just collapsed on MLK Road near the Root Cellar café,” said a bystander as he rushed towards the man lying still in the parking lot. A second young man on the scene was already performing CPR.

The bystander asked the young man, “What just happened? I saw him walking a few minutes ago and he seemed fine.”

The young man responded with a broken voice: “Andrew is our neighbor. I don’t know what happened. I found him unresponsive and lying on the ground beside his car door.”

This unfortunate scenario is most commonly attributed to sudden cardiac (heart) death, which is defined by the World Health Organization as a death occurring within an hour of symptom onset or an unexpected death within 24 hours of having been seen alive. This definition is restricted by timing and manner of death, and also assumes that death within this time frame is due to a cardiac event. This often results in the mischaracterization of non-cardiac incidents of sudden death. Seeking to address these limitations in the examination of sudden death, both due to cardiac and noncardiac causes, physician scientists in the Division of Cardiology at the University of North Carolina at Chapel Hill Hospital initiated a project studying all causes of out-of-hospital sudden unexpected death in North Carolina. This initiative is called the SUDDEN Project.

What is the SUDDEN Project?

Sudden death is a term encompassing all causes of out-of-hospital sudden unexpected death (SUD). The SUDDEN project, led by principal investigator Ross J. Simpson Jr., was piloted in Wake County, North Carolina in March 2013. All natural deaths, regardless of underlying cause, were studied. “Unexpectedness” was determined based on the location of death, manner of death, and medical history in persons aged 18 to 64. Victims of unexpected death were characterized by their demographic and medical disease profiles to identify risk factors. As our understanding of sudden death advances, we can begin to recognize risk factors for both cardiac and non-cardiac deaths. The initiative has now expanded to more than 10 counties in North Carolina and a few counties in South Carolina.

Why is it important to study out-of-hospital sudden unexpected death?

Previously, sudden death had been considered an unpredictable first sign of an underlying disease. However, Lewis M.E. et al. described out-of-hospital SUD as a syndrome with a hefty price tag: 1 out of 3 victims were characterized as high utilizers of the health care system. These individuals die not only from cardiac events such as heart attacks, but also from worsening chronic diseases including diabetes, hypertension, obesity, lung disease, and kidney disease. Investigators described the risk factors and medical disease profiles of out-of-hospital SUD victims by age group. All of these diseases decrease victims’ quality of life, a deterioration driven largely by frequent hospitalizations, clinic visits, and emergency department visits. Sudden death victims can also suffer from unaddressed mental health issues, complicating the management of their baseline chronic disease. Out-of-hospital SUD is not only an individual burden, but also a familial and societal one. This loss of life incurs countless unforeseen medical and end-of-life expenses. Even so, these costs do not begin to compare to the emotional impact on the families of those who die suddenly. These economic and emotional burdens could be minimized by identifying and treating the underlying diseases that eventually cause sudden death, and identifying risk factors that predict SUD could allow for timely interventions to prevent these deaths.

People who die from SUD also suffer numerous health problems.

Future of out-of-hospital sudden unexpected death prevention

Characterizing the profiles of out-of-hospital SUD victims will support the development of intervention-planning strategies to identify and treat potential out-of-hospital SUD victims. To develop community-based paramedicine programs, SUDDEN has established close collaborations with the Gillings School of Global Public Health, the Odum Institute, and the Eshelman School of Pharmacy at UNC, as well as the Emergency Medical Services of several counties. It has also developed national collaborations with both the Environmental Protection Agency and the Centers for Disease Control and Prevention. These collaborations aim to build a framework of targeted prevention plans through meaningful partnerships with community health partners across North Carolina and the United States, providing adequate access to health care resources and social support networks. The SUDDEN Project is fortunate to welcome students from multiple educational backgrounds to collaborate on its ongoing projects. For general inquiries and/or questions, please email sudden@med.unc.edu.

This summer, an afternoon spent flying kites at the beach will be just another day at work for some researchers at the University of North Carolina at Chapel Hill.

Now that classes are out of session, Elsemarie deVries and Evan Goldstein don their sandals and sunscreen and haul kites and cameras to the Outer Banks in the name of science. They are coastal geologists who study how beach grass and sand dunes affect each other.

These researchers are part of a “21st-century renaissance” of scientists who use kites to collect geographic information. A camera cradled beneath a kite snaps a collection of what Goldstein calls “higglety-pigglety images” from all over the beach. Once they get back to the lab, the team uses software to stitch these pictures together into a 3D map. With enough data, they hope to understand how dunes grow.

To attach real dimensions to their maps, the researchers roam around on the beach with a GPS unit, noting the coordinates of specific locations. In a pinch, anything can be a ground control point – on one memorable day, dog-poo bags marked the GPS locations (although deVries is quick to point out that the baggies contained only sand – not feces).
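One way to see how ground control points attach real dimensions to a model is a simple scaling calculation: two points with known GPS coordinates are matched to the same two points in the (unitless) 3D model, and the ratio of the real-world distance to the model distance gives a scale factor. This is a generic photogrammetry sketch, not the team's actual software; all coordinates below are hypothetical.

```python
import math

# Toy illustration of using ground control points (GCPs) to scale a
# photogrammetric model. All coordinates are hypothetical.

def distance(p, q):
    """Euclidean distance between two 3D points."""
    return math.dist(p, q)

# Hypothetical GCPs: (easting, northing, elevation) in meters from the GPS unit.
gcp_real = [(421300.0, 3975100.0, 2.0), (421340.0, 3975130.0, 2.5)]

# The same two points located in the model's arbitrary coordinate system.
gcp_model = [(1.0, 2.0, 0.10), (5.0, 5.0, 0.15)]

# Meters per model unit.
scale = distance(*gcp_real) / distance(*gcp_model)

# A dune feature measuring 0.4 model units is then 0.4 * scale meters long.
print(round(scale, 2))
```

Production photogrammetry tools use many control points and a full least-squares fit rather than a single pair, but the underlying idea of tying arbitrary model units to surveyed real-world coordinates is the same.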

Although many people find the beach a relaxing place, Goldstein says that some of these trips to the coast have actually been “pretty stressful.” Particularly windy weather can sour a field excursion since strong wind can send the kite into a nosedive. To solve this problem, Goldstein dove headfirst into the physics-of-kite-flying literature (yes, that exists), and the team picked up a more stable kite with a keel.

Evan Goldstein snaps a selfie with one of the airborne cameras. Both images courtesy of Evan Goldstein.

Now that they know how to capture these bird’s eye images and turn them into topographic maps, Goldstein is setting his sights on “capturing time series – going back to the same site repeatedly over and over again.” They hope that building a series of 3D maps will show them how plants and dunes change together.

Taking pictures with kites instead of, say, drones, which are increasingly used for aerial photography, may seem delightfully quirky and old-fashioned, but cost and legality make kites an appealing option. Even though the kind of kites able to support a camera cost a little more than tuppence for paper and string, they can still be less pricy than drones. Goldstein also points to “the regulatory advantage” as a key reason that kites will be keeping this research aloft in the upcoming months.

Author’s note: Although I am not involved in the kite mapping project, I am a MS student in the same lab as Dr. Evan Goldstein and PhD candidate Elsemarie deVries, under the direction of principal investigator Dr. Laura Moore. Dr. Kenneth Ells of UNC-Wilmington is an additional collaborator on this project.