Thursday, October 21, 2010

Paleontologists have unearthed the first extinct penguin with preserved evidence of scales and feathers. The 36-million-year-old fossil from Peru shows the new giant penguin's feathers were reddish brown and grey, distinct from the black tuxedoed look of living penguins.

The new species, Inkayacu paracasensis, or Water King, stood nearly five feet tall, about twice the size of an Emperor penguin, the largest penguin alive today.

"Before this fossil, we had no evidence about the feathers, colors and flipper shapes of ancient penguins. We had questions and this was our first chance to start answering them," said Julia Clarke, paleontologist at The University of Texas at Austin's Jackson School of Geosciences and lead author of a paper on the discovery in the Sept. 30 online edition of the journal Science.

The fossil shows that the flipper and feather shapes that make penguins such powerful swimmers evolved early, while the color patterning of living penguins is likely a much more recent innovation.

Like living penguins and unlike all other birds, Inkayacu's wing feathers were radically modified in shape, densely packed and stacked on top of each other, forming stiff, narrow flippers. Its body feathers had broad shafts that, in living penguins, help streamline the body.

Bird feathers get some of their colors from the size, shape and arrangement of nanoscale structures called melanosomes. Matthew Shawkey and Liliana D'Alba, coauthors at the University of Akron, compared melanosomes recovered from the fossil to their extensive library of those from living birds to reconstruct the colors of the fossil penguin's feathers.

Melanosomes in Inkayacu were similar to those in birds other than living penguins, allowing the researchers to deduce the colors they produced. When the team looked at living penguins, they were surprised to find their colors were created by giant melanosomes, broader than in the fossil and in all other birds surveyed. They were also packed into groups that looked like clusters of grapes.
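As a rough sketch, the matching step amounts to measuring a fossil melanosome and finding its nearest neighbor in a reference library of living-bird melanosomes. All measurements and color assignments below are invented for illustration; they are not the study's data.

```python
import math

# Hypothetical library mapping melanosome measurements (length, width in
# nanometres) to the feather colors they produce in living birds.
# Values are illustrative only.
LIBRARY = [
    ((600, 300), "grey"),
    ((1000, 300), "reddish-brown"),
    ((1400, 400), "black"),
]

def nearest_color(measurement):
    """Assign a fossil melanosome the color of its closest library match."""
    length, width = measurement
    def dist(entry):
        (l, w), _ = entry
        return math.hypot(length - l, width - w)
    return min(LIBRARY, key=dist)[1]

# A fossil melanosome measuring 950 x 320 nm falls nearest the
# reddish-brown cluster in this toy library.
print(nearest_color((950, 320)))
```

In practice the researchers compared distributions of many measured melanosomes, not single grains, but the underlying idea is the same kind of nearest-match comparison.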

Why, the researchers wondered, did modern penguins apparently evolve their own special way to make black-brown feathers?

The unique shape, size and arrangement of living penguin melanosomes would alter the feather microstructure on the nano and micro scale, and melanin, contained within melanosomes, is known to give feathers resistance to wear and fracturing. Perhaps, the researchers speculate, these shifts might have had more to do with hydrodynamic demands of an aquatic lifestyle than with coloration. Penguin colors may have shifted for entirely different reasons related to the later origin of primary predators of extant penguins such as seals or other changes in late Cenozoic seas.

"Insights into the color of extinct organisms can reveal clues to their ecology and behavior," said co-author Jakob Vinther at Yale University, who first noted fossil preservation of melanosomes in bird feathers. "But most of all, I think it is simply just cool to get a look at the color of a remarkable extinct organism, such as a giant fossil penguin."

Inkayacu paracasensis (een-kah-yah-koo par-ah-kah-sin-sis) was discovered by Peruvian student Ali Altamirano in Reserva Nacional de Paracas, Peru. Inkayacu's body length while swimming would have been about 1.5 meters (five feet), making it one of the largest penguins ever to have lived. When the team noticed scaly soft tissue preserved on an exposed foot, they nicknamed it "Pedro" after a sleazy or "escamoso" (scaly) character from a Colombian telenovela.

The latest discoveries add to earlier work by Clarke and her colleagues in Peru that challenges the conventional vision of early penguin evolution. Inkayacu and other finds show there was a rich diversity of giant penguin species in the late Eocene epoch (about 36 to 41 million years ago) of low-latitude Peru.

"This is an extraordinary site to preserve evidence of structures like scales and feathers," said Clarke. "So there's incredible potential for new discoveries that can change our view of not only penguin evolution, but of other marine vertebrates."

The fossil is part of the permanent collection of the Museo de Historia Natural-UNMSM in Lima. An exhibit by expedition co-leader Rodolfo Salas about the fossil will open at the Reserva Nacional de Paracas in November.

New research funded through The University of Texas at Austin provides evidence of the vicious cycle created when an obese individual overeats to compensate for reduced pleasure from food.

Obese individuals have fewer pleasure receptors and overeat to compensate, according to a study by University of Texas at Austin senior research fellow Eric Stice and his colleagues published in The Journal of Neuroscience. Stice presents evidence that this overeating may further weaken the responsiveness of the pleasure receptors ("hypofunctioning reward circuitry"), diminishing the rewards gained from overeating.

Food intake is associated with dopamine release. The degree of pleasure derived from eating correlates with the amount of dopamine released. Evidence shows obese individuals have fewer dopamine (D2) receptors in the brain relative to lean individuals and suggests obese individuals overeat to compensate for this reward deficit.

People with fewer of the dopamine receptors need to take in more of a rewarding substance — such as food or drugs — to get an effect other people get with less.

"Although recent findings suggested that obese individuals may experience less pleasure when eating, and therefore eat more to compensate, this is the first prospective evidence to show that the overeating itself further blunts the reward circuitry," said Stice, a senior scientist at Oregon Research Institute, a nonprofit, independent behavioral research center. "The weakened responsivity of the reward circuitry increases the risk for future weight gain in a feed-forward manner. This may explain why obesity typically shows a chronic course and is resistant to treatment."

Using Functional Magnetic Resonance Imaging (fMRI), Stice's team measured the extent to which a certain area of the brain (the dorsal striatum) was activated in response to the individual's consumption of a taste of chocolate milkshake (versus a tasteless solution). Researchers tracked participants' changes in body mass index over six months.

Results indicated that participants who gained weight showed significantly less activation in response to milkshake intake at the six-month follow-up, relative both to their baseline scan and to women who did not gain weight.
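The study's comparison logic can be sketched in a few lines: the response of interest is activation to the milkshake minus activation to the tasteless control, tracked from baseline to follow-up and compared between groups. All numbers below are invented and the scale is arbitrary.

```python
# Toy sketch of the fMRI contrast described above (all values invented).
def striatal_response(milkshake, tasteless):
    """Activation to milkshake relative to the tasteless solution."""
    return milkshake - tasteless

# Hypothetical dorsal-striatum activations (arbitrary units).
gainers_baseline = striatal_response(20, 5)        # contrast = 15
gainers_followup = striatal_response(12, 5)        # contrast = 7
non_gainers_baseline = striatal_response(20, 5)    # contrast = 15
non_gainers_followup = striatal_response(19, 5)    # contrast = 14

gainer_change = gainers_followup - gainers_baseline              # -8
non_gainer_change = non_gainers_followup - non_gainers_baseline  # -1

# Weight gainers show the larger drop in reward-circuit response.
print(gainer_change < non_gainer_change)
```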

"This is a novel contribution to the literature because, to our knowledge, this is the first prospective fMRI study to investigate change in striatal response to food consumption as a function of weight change," said Stice. "These results will be important when developing programs to prevent and treat obesity."

The research was conducted at the University of Oregon brain imaging center. Stice's co-authors include Sonja Yokum, a former post-doctoral fellow at The University of Texas at Austin.

Stice has been studying eating disorders and obesity for 20 years. This research has produced several prevention programs that reliably reduce risk for onset of eating disorders and obesity.

If you need an excuse to turn in early, results of University at Buffalo research provide a good one.

A study published online ahead of print in the Annals of Epidemiology shows that people who slept less than six hours a night during the work week were three times more likely to have elevated levels of blood sugar than those who slept six to eight hours.

"This research supports growing evidence of the association of inadequate sleep with adverse health issues," says the study's first author Lisa Rafalson, PhD, a National Research Service Award (NRSA) Fellow in the UB Department of Family Medicine, UB School of Medicine and Biomedical Sciences.

Rafalson now is on the faculty of D'Youville College in Buffalo, with volunteer appointments in UB pediatrics, family medicine and social and preventive medicine departments.

"To our knowledge no other studies have reported specifically on the association of sleep and impaired fasting glucose, but our estimates are in line with studies that examined the association of sleep, impaired glucose tolerance and type 2 diabetes.

"Impaired fasting glucose – a glucose level between 100 and 125 mg/dl – is known as pre-diabetes, and about 25 percent of people who have impaired fasting glucose will, at some point, develop type 2 diabetes," Rafalson says.

The researchers suggest several mechanisms that could play a role in the relationship of sleep loss and increased glucose levels, noting that previous experiments have shown that severely restricting sleep can result in an increase in appetite-stimulating hormones and a decrease in hormones that inhibit appetite, as well as increased hunger for calorie-dense foods.

Other mechanisms that may be in play involve the sympathetic nervous system, as well as an increase in inflammatory factors, which are well known to increase diabetes risk, the study reports.

Rafalson's findings are based on data from a six-year follow-up of participants who initially took part in the Western New York Health Study, conducted from 1996 to 2001 by UB's Department of Social and Preventive Medicine.

The 91 persons with normal fasting glucose levels at baseline who developed pre-diabetes by their follow-up exam were matched with study participants who had maintained normal glucose levels.

Participants were placed into three groups based on their Sunday through Thursday average daily amount of sleep: "short-sleepers," who reported less than six hours of sleep nightly; "long-sleepers," who reported sleeping more than eight hours nightly; and a reference group who slept six to eight hours a night.

Results show that "short-sleepers" had a significantly increased risk of progressing from normal glucose levels to pre-diabetes, compared to those who slept six to eight hours nightly.
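The size of that risk can be illustrated with an odds ratio computed from a simple two-by-two table. The counts below are invented to mirror a roughly threefold risk; they are not the study's data.

```python
# Invented case-control counts: exposure = short sleep (<6 hours),
# outcome = progression from normal fasting glucose to pre-diabetes.
short_cases = 30      # short-sleepers who developed pre-diabetes
short_controls = 15   # short-sleepers who stayed normal
ref_cases = 60        # 6-8 hour sleepers who developed pre-diabetes
ref_controls = 90     # 6-8 hour sleepers who stayed normal

# Odds ratio: odds of the outcome among the exposed divided by the
# odds among the unexposed reference group.
odds_ratio = (short_cases / short_controls) / (ref_cases / ref_controls)
print(round(odds_ratio, 1))
```

A value near 3 corresponds to the reported threefold increase in risk for short sleepers.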

The study found that sleeping an average of more than eight hours a night also increased the likelihood of developing increased fasting glucose, consistent with other literature, but the findings weren't statistically significant, possibly because only five people were in that group, Rafalson noted.

"A high glucose level is associated with many complications, such as heart disease and premature death," says Rafalson. "Physicians should discuss sleep habits with their patients, along with other lifestyle issues that are important to long-term health, such as diet and exercise."

Supercomputer simulations at the Department of Energy's Oak Ridge National Laboratory are helping scientists unravel how nucleic acids could have contributed to the origins of life.

A research team led by Jeremy Smith, who directs ORNL's Center for Molecular Biophysics and holds a Governor's Chair at University of Tennessee, used molecular dynamics simulation to probe an organic chemical reaction that may have been important in the evolution of ribonucleic acids, or RNA, into early life forms.

Certain types of RNA called ribozymes are capable of both storing genetic information and catalyzing chemical reactions - two necessary features in the formation of life. The research team looked at a lab-grown ribozyme that catalyzes the Diels-Alder reaction, which has broad applications in organic chemistry.

"Life means making molecules that reproduce themselves, and it requires molecules that are sufficiently complex to do so," Smith said. "If a ribozyme like the Diels-Alderase is capable of doing organic chemistry to build up complex molecules, then potentially something like that could have been present to create the building blocks of life."

The research team found a theoretical explanation for why the Diels-Alder ribozyme needs magnesium to function. Computational models of the ribozyme's internal motions allowed the researchers to capture and understand the finer details of the fast-paced reaction. The static nature of conventional experimental techniques such as chemical probing and X-ray analysis had not been able to reveal the dynamics of the system.

"Computer simulations can provide insight into biological systems that you can't get any other way," Smith said. "Since these structures are changing so much, the dynamic aspects are difficult to understand, but simulation is a good way of doing it."

Smith explained how their calculations showed that the ribozyme's internal dynamics included an active site, or "mouth," which opens and closes to control the reaction. The concentration of magnesium ions directly impacts the ribozyme's movements.

"When there's no magnesium present, the mouth closes, the substrate can't get in, and the reaction can't take place. We found that magnesium ions bind to a special location on the ribozyme to keep the mouth open," Smith said.

Algae and photosynthetic bacteria hold a hidden treasure – fat molecules known as lipids – which can be converted to renewable biofuels. Such microorganisms offer an attractive alternative to the unsustainable use of petroleum-based fossil fuels, as well as biofuel sources requiring arable cropland.

Cyanobacteria are capable of producing around 15,000 gallons of biofuel per acre – roughly 100 times that of plant or forest products including corn or switchgrass – and require only simple nutrients, sunlight and CO2 for growth.

But prying out the cellular ingredients needed for biofuels has so far come at a steep price, both economically and environmentally. Chemicals traditionally used in the process are extremely toxic.

Graduate researcher Jie Sheng and his colleagues at Arizona State University’s Biodesign Institute have been exploring new methods for performing lipid extraction by less harmful means. Under the guidance of Bruce Rittmann, director of Biodesign’s Center for Environmental Biotechnology, the team successfully tested several formulas that recover lipid with high efficiency. The group’s results appear in the current issue of Bioresource Technology.

The two best candidates for photosynthetic biofuel production – algae and cyanobacteria – may be readily refined to produce a range of green gasolines, diesels, and other biofuels. But as Sheng notes, cyanobacteria offer several crucial advantages as a lipid source. “Cyanobacteria, particularly the strain we use, (known as Synechocystis) are very simple and have been fully sequenced genetically, so that we can easily modify them.” Such genetic re-tooling would allow the quantity and quality of lipid production for biofuel to be optimized.

Further, unlike algae, which must be subjected to conditions of stress to maximize their lipid output, cyanobacteria are most successfully cultured under conditions of optimal growth, so that high-density lipid production is paired with a high rate of biomass production. “When the cell is provided with happy conditions for growth, we are able to get much more lipid out,” Sheng says.

But gathering the valuable lipids from cyanobacteria first requires the disruption of a tough, protective membrane. A half-century ago, Jordi Folch, a pioneering neurochemist, developed a method that remains the gold standard for isolating lipids from cells. The Folch method, as it is commonly known, has also been used by researchers to extract lipid from algae and cyanobacteria.

The technique involves the use of methanol and chloroform, which eat away and dissolve the lipids in a cell’s protective membrane, so that lipids may be harvested. For biofuel production, the Folch method is not practical, as large quantities of chloroform would wreak havoc on the environment and human health. (Chloroform, once a popular anesthetic, is categorized as a B2 chemical by the U.S. EPA – possibly carcinogenic.)

Nevertheless, breaking down the durable thylakoid membrane of Synechocystis to get at the valuable lipids is not an easy task. As Sheng explains, alternate, less toxic chemicals have been used with success to extract lipid from algae, including ethanol, isopropanol, butanol, methyl tert-butyl ether (MTBE), acetic acid esters, hexane, and various combinations of these, but their viability for use with cyanobacteria was uncertain. Sheng’s team wanted to test such chloroform-free methods, to see if they could be used to extract lipids from Synechocystis.

In addition to the challenge of penetrating the more robust cyanobacterial cell membrane, Sheng notes that the lipids found in cyanobacteria are distinct from the lipids found in algae, vegetable and animal tissue, and the extraction methods may not work. In a series of experiments, the team first demonstrated that the Folch method, as well as a closely related technique (Bligh & Dyer), were the most efficient means of lipid extraction for Synechocystis. Electron microscopy imaging showed effective penetration of the cell membrane and the ability to extract cyanobacterial lipids with high specificity.

The results closely matched the predictions the group had made through molecular modeling of the process. In contrast, ethanol, isopropanol, butanol, acetic ester, hexane, and combinations of these chemicals were significantly less effective in recovering lipid and were less specific, also recovering more impurities.

Intriguingly, the combination of methanol and MTBE showed high efficiency in cell penetration and lipid recovery, roughly comparable to the Folch and Bligh & Dyer methods. The group believes the combination of methanol and MTBE, which allows for the reduction of methanol and eliminates chloroform, may lower the toxicity and environmental impact of the extraction process.

The research was carried out through the collaborative efforts of biologists and engineers, and continuing work in conjunction with other teams at the Biodesign Institute will explore mutant strains of Synechocystis boasting much higher lipid yields than the wild variety used in these experiments.

Further studies, sponsored by the Department of Energy, involve genetically modifying Synechocystis so that the portion of the lipid refined into biofuel – the fatty acids – may be directly secreted through the cell wall. These and other ongoing efforts are helping to advance biofuel production from benchtop to eventual commercialization.

Geologists have found evidence that some 55 million years ago a river as big as the modern Colorado flowed through Arizona into Utah in the opposite direction from the present-day river. Writing in the October issue of the journal Geology, they have named this ancient northeastward-flowing river the California River, after its inferred source in the Mojave region of southern California.

Lead author Steven Davis, a post-doctoral researcher in the Department of Global Ecology at the Carnegie Institution, and his colleagues discovered the ancient river system by comparing sedimentary deposits in Utah and southwest Arizona. By analyzing the uranium and lead isotopes in sand grains made of the mineral zircon, the researchers were able to determine that the sand at both localities came from the same source -- igneous bedrock in the Mojave region of southern California.
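The provenance test boils down to asking whether the detrital sand grains' U-Pb ages cluster in the same ranges as the candidate source bedrock. A minimal sketch, with invented age values that are not the study's data:

```python
# Hypothetical U-Pb zircon age ranges (millions of years) characteristic
# of the candidate Mojave bedrock source, and invented ages measured
# from sand grains in the Utah deposit.
mojave_bedrock_ranges = [(70, 90), (1400, 1450), (1650, 1750)]
utah_grain_ages = [75, 82, 88, 1420, 1435, 1690, 1710, 1745, 300]

def fraction_matching(ages, ranges):
    """Fraction of grains whose ages fall inside any source age range."""
    hits = sum(any(lo <= a <= hi for lo, hi in ranges) for a in ages)
    return hits / len(ages)

# A high match fraction supports a shared source for the two localities.
print(round(fraction_matching(utah_grain_ages, mojave_bedrock_ranges), 2))
```

Real detrital-zircon studies compare full age spectra statistically, but the core comparison follows this pattern.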

The river deposits in Utah, called the Colton Formation by geologists, formed a delta where the river emptied into a large lake. They are more than 400 miles (700 kilometers) to the northeast of their source in California. "The river was on a very similar scale to the modern Colorado-Green River system," says Davis, "but it flowed in the opposite direction." The modern Colorado River's headwaters are in the Rocky Mountains, flowing southwest to the river's mouth in the Gulf of California.

The deposits of the Colton Formation are approximately 55 million years old. Recently, other researchers have speculated that rivers older than the Colorado River may have carved an ancestral or "proto" Grand Canyon around this time, long before the Colorado began eroding the present canyon less than 20 million years ago. But Davis sees no evidence of this. "The Grand Canyon would have been on the river's route as it flowed from the Mojave to Utah," he says. "It stands to reason that if there was major erosion of a canyon going on we would see lots of zircon grains from that area, but we don't."

The mighty California River likely met its end as the Rocky Mountains rose and the northern Colorado Plateau tilted, reversing the slope of the land surface and the direction of the river’s flow to create the present Colorado-Green River system. Davis and his colleagues have not determined precisely when the change occurred, however. "The river could have persisted for as long as 20 million years before the topography shifted enough to reverse its flow," he says.

With the help of tiny, see-through fish, Stanford University School of Medicine researchers are homing in on what happens in the brain while you sleep. In a new study, they show how the circadian clock and sleep affect the scope of neuron-to-neuron connections in a particular region of the brain, and they identified a gene that appears to regulate the number of these connections, called synapses.

“This is the first time differences in the number of synapses between day and night and between wake and sleep have been shown in a living animal,” said Lior Appelbaum, PhD, co-first author of the study, which appears in the Oct. 6 issue of Neuron. He said further studies using the imaging method he and his colleagues developed could shed more light on how our brain activities vary according to time of day.

Appelbaum, who is now a principal investigator in a lab at Bar-Ilan University in Israel, spent five years conducting the work while in the lab of Emmanuel Mignot, MD, PhD, professor of psychiatry and behavioral sciences. Mignot, who also directs the Stanford Center for Sleep Sciences and Medicine, is co-senior author of the paper; the other first author is Gordon Wang, PhD, a postdoctoral scholar in molecular and cellular physiology.

Why we need to sleep and how, exactly, sleep is restorative are two big, unanswered questions in biology. Knowing that brain performance changes throughout the day, researchers believe that daily cycles and sleep regulate “synaptic plasticity” — the ability of synapses to change strength and even form and erase. And they theorize that nighttime changes in the number and strength of synapses help recharge the brain which, in turn, benefits memory, learning and other functions.

As the researchers note in their paper, daily cycle-related changes in the number of neuron-to-neuron connections hadn’t previously been shown in a living vertebrate, and the “molecular mechanisms of this type of synaptic plasticity are poorly understood.” So they turned to the zebrafish, a small aquarium pet, for help.

Like humans, zebrafish are active during the day and sleep at night — something that researchers in Mignot’s lab discovered in previous research. Larvae of the handy little fish also happen to be transparent, enabling researchers to look directly at the animal’s neuronal network. “This can’t be done in any other vertebrate animal,” said Mignot, who is also the Craig Reynolds Professor of Sleep Medicine, adding that his group was aided by the imaging expertise of co-author Stephen Smith, PhD, professor of molecular and cellular physiology, and his lab.

For this study, the researchers used a fluorescence-imaging technique to monitor neural activity in the specific region of the brain that regulates sleeping and waking. With their technique, they were able to watch synapses within individual hypocretin neurons, and they showed that the number of these connections fluctuated between day and night.

Appelbaum noted this is the first time rhythmic changes in synapse numbers have been observed in the brain of a living vertebrate. The work also, Mignot said, further demonstrates the brain’s ability to reorganize and adapt to changes. “It gets ready for new activity by telling the neurons they have to shut down synapses during this time of day but increase them at other times of the day,” he said.

The researchers determined that the differing number of synapses between day and night was primarily regulated by the body’s internal clock but was also affected by behavior — for instance, how much sleep the fish got. They also identified a gene, NPTX2b, that appears to be involved in regulating the rhythmic changes in synapses. “It’s one actor in an unknown mechanism,” said Appelbaum, explaining it’s unlikely that only one gene is involved, but its identification gets researchers that much closer to understanding the process.

Appelbaum said he considers the imaging method itself one of the strongest points of the paper, and by using the technique developed in this study, investigators can image synaptic plasticity in other neuronal systems — circuits — of the zebrafish to expand on these findings. “With these techniques, we can look at other areas of the brain, such as the one in charge of memory, to see how sleep cycles affect synapses,” he said, adding that he doesn’t expect to see the same results in every part of the brain. “Those changes are likely circuit-dependent,” he explained, saying that synaptic plasticity in memory circuits might prove to be more affected by behavior, such as sleep, than by the circadian clock (the opposite of what was found in hypocretin neurons). Knowing this, he said, could help identify which regions of the brain are most affected by waking and sleeping and further uncover what happens when we slumber.

Sensation seeking—the urge to do exciting things—has been linked to dopamine, a chemical that carries messages in your brain. For a new study published in Psychological Science, a journal of the Association for Psychological Science, scientists analyzed genes in the dopamine system and found a group of mutations that help predict whether someone is inclined toward sensation seeking.

Sensation seeking has been linked to a range of behavior disorders, such as drug addiction. It isn’t all bad, though. “Not everyone who’s high on sensation seeking becomes a drug addict. They may become an Army Ranger or an artist. It’s all in how you channel it,” says Jaime Derringer, a PhD student at the University of Minnesota and the first author of the study. She wanted to use a new technique to find out more about the genetics of sensation seeking. Most obvious connections with genes, like the BRCA gene that increases the risk for breast cancer, have already been found, Derringer says. Now new methods are letting scientists look for more subtle associations between genes and all kinds of traits, including behavior and personality.

Derringer used a kind of mutation in DNA called a single-nucleotide polymorphism, or SNP. A SNP is a change in just one “letter” of the DNA. She started by picking eight genes with various roles related to the neurotransmitter dopamine, which has been linked to sensation seeking in other studies. She looked at a group of 635 people who were part of a study on addiction. For each one, she had genetic information on 273 SNPs known to appear in those eight genes and a score for how much they were inclined to sensation seeking. Using that data, she was able to narrow down the 273 SNPs to 12 potentially important ones. When she combined these 12 SNPs, they explained just under 4 percent of the difference between people in sensation seeking. This may not seem like a lot, but it’s “quite large for a genetic study,” Derringer says.
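The notion of "variance explained" can be illustrated with a toy calculation: combine the risk alleles into a single predictor, correlate it with the trait score, and report R-squared, the squared correlation. All numbers below are invented and unrelated to the study's actual 4 percent figure.

```python
# Invented data: each person's count of risk alleles summed across a set
# of SNPs, and their sensation-seeking score on an arbitrary scale.
allele_counts = [2, 5, 3, 8, 6, 1, 7, 4]
scores        = [10, 14, 9, 18, 15, 8, 16, 12]

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson_r(allele_counts, scores)
# R-squared: the fraction of score variance explained by the predictor.
print(round(r * r, 2))
```

In the actual study the combined 12-SNP predictor explained just under 0.04 of the variance, small in absolute terms but notable for a behavioral genetic association.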

It’s too soon to go out and start screening people for these mutations; not enough is known about how genes affect behavior. “One of the things we think is most exciting about this isn’t necessarily the story about dopamine and sensation seeking,” says Derringer. “It’s rather the method that we’re using. We used a sample of 635 people, which is extremely small, and we were still able to detect a significant effect. That’s actually quite rare in these studies.” She said the same method could be used to look at the link between biology and other behaviors—dopamine and cocaine dependence, for example, or serotonin and depression.

Eventually these methods could lead to tests that might help predict whether someone is likely to have problems later, and whether there should be early intervention to guide them down a healthier path.

A new species of dinosaur discovered in Arizona suggests dinosaurs did not spread throughout the world by overpowering other species, but by taking advantage of a natural catastrophe that wiped out their competitors.

Tim Rowe, professor of paleontology at The University of Texas at Austin's Jackson School of Geosciences, led the effort to describe the new dinosaur along with co-authors Hans-Dieter Sues, curator of vertebrate paleontology at the National Museum of Natural History in Washington, D.C., and Robert R. Reisz, professor and chair of biology at the University of Toronto. The description appears in the online edition of the journal Proceedings of the Royal Society B on Oct. 6.

Sarahsaurus, which lived about 190 million years ago during the Early Jurassic Period, was 14 feet long and weighed about 250 pounds. Sarahsaurus was a sauropodomorph, a small, close relative of the sauropods, the largest land animals in history.

Conventional wisdom says that soon after dinosaurs originated in what is now South America, they rapidly spread out to conquer every corner of the world, so smart and powerful they overwhelmed all the animals in their path. Sarahsaurus challenges that view.

One of the five great mass extinction events in Earth's history happened at the end of the Triassic Period 200 million years ago, wiping out many of the potential competitors to dinosaurs. Evidence from Sarahsaurus and two other early sauropodomorphs suggests that each migrated into North America in separate waves long after the extinction and that no such dinosaurs migrated there before the extinction.

"We used to think of dinosaurs as fierce creatures that outcompeted everyone else," said Rowe. "Now we're starting to see that's not really the case. They were humbler, more opportunistic creatures. They didn't invade the neighborhood. They waited for the residents to leave and when no one was watching, they moved in."

Sarahsaurus had physical traits usually associated with gigantic animals. For example, its thigh bones were long and straight like pillars, yet were not much larger than a human's thigh bones. Sarahsaurus shows that sauropodomorphs started out small and later evolved to a very large size.

"And so it's starting to look like some of our ideas about how size and evolution work are probably in need of revision," said Rowe, "and that some of the features we thought were tied to gigantism and the physics and mechanics of the bones may not be right."

Rowe is also intrigued by the new dinosaur's hands.

"We've never found anything like this in western North America," he said. "Its hand is smaller than my hand, but if you line the base of the thumbs up, this small hand is much more powerfully built than my hand and it has these big claws. It's a very strange animal. It's doing something with its hands that involved great strength and power, but we don't know what."

Sarahsaurus is named in honor of Sarah (Mrs. Ernest) Butler, an Austin philanthropist and longtime supporter of the arts and sciences. Butler chaired a fundraising committee for the Dino Pit, an interactive exhibit Rowe helped create at the Austin Nature and Science Center that encourages children to dig up their own fossil replicas. The Dino Pit had been talked about for 20 years, but fundraising efforts stalled until Butler became chair.

"I told her if she really raised a million dollars to build the Dino Pit, I'd name a dinosaur after her," he said.

A team of researchers and students led by Rowe discovered Sarahsaurus on a field trip in Arizona in 1997. To reach publication, the team had to obtain excavation permits, excavate the site over three years, remove each fossil fragment from surrounding rock, measure and analyze each piece, and CT scan pieces to study internal structures.

"It took me 13 years, but I'm delighted by the great success of the Dino Pit, which hundreds of thousands of kids have now visited. And also that we had the luck to make a find of suitable importance to carry Sarah's name."

Researchers from North Carolina State University have patented technology that is expected to revolutionize the global energy and communications infrastructure – and create U.S. jobs in the process.

The researchers have, for the first time, developed a way to integrate gallium nitride (GaN) sensors and devices directly into silicon-based computer chips. “This enables the development of high-power – high-voltage and high-current – devices that are critical for the development of energy distribution devices, such as smart grid technology and high-frequency military communications,” says Dr. Jay Narayan, the John C. Fan Distinguished Chair Professor of Materials Science and Engineering and co-holder of the patent.

“GaN can handle more power than conventional transistors. And it can do so faster, because it can be made into single crystals that are integrated into a silicon chip – so electrons can move more quickly,” Narayan says.

“This integration of GaN on the silicon platform without any buffer layers has enabled the creation of multifunctional smart sensors, high-electron mobility transistors, high-power devices, and high-voltage switches for smart grids, which impact our energy and environmental future,” Narayan explains.

Integrating GaN into silicon chips also makes a broader range of radio frequencies available, which will enable the development of advanced communication technologies. “These devices stand to meet the challenges of high-power, high-frequency and high-bandwidth needs for advanced consumer applications and military satellite communications,” Narayan says.

“The United States still leads the world in innovation,” Narayan says. “But with the advent of the internet and instant communication, just doing innovative research isn’t enough anymore. We have to take steps to ensure that our advantage in innovation can be translated into products that create jobs here at home.”

“Direct integration of devices based on different types of semiconductors onto silicon chips is of considerable interest because it can enable different functionalities, such as lasers or higher performance transistors,” says Dr. Pradeep Fulay of the National Science Foundation (NSF), which funded the GaN research at NC State. “Professor Narayan has used a special process that allows integration of semiconducting materials like GaN on the silicon so as to create hybrid type computer chips. This research will likely lead to transistors with far superior power and performance sought for many commercial and military communication applications.”

Aeroplanes could be far better for the environment, create less noise and be safer for passengers thanks to new software developed by a University of Manchester academic.

Currently, airlines underestimate the amount of dangerous CO2 emissions they release into the atmosphere by up to 100% – meaning their aircraft are actually far more harmful than previously thought.

Now new software, called FLIGHT, can expertly predict the true level of emissions released and help the industry improve its environmental reputation – one of the issues for which it is most criticised.

The software, developed by Dr Antonio Filippone, can be easily downloaded from a website by airline companies.

While the potential to reduce emissions is arguably the most important use of FLIGHT, it has a range of other functions, from noise reduction to accident investigation and prevention.

Noise around airports is a huge issue, and FLIGHT can help air traffic controllers and airline authorities determine the best flight path for incoming and outgoing planes by providing exact measurements of noise given off on take-off and landing.

The software can also help airline companies with passenger load and the weight of luggage. Software copyright protection was arranged by The University of Manchester’s intellectual property commercialisation company, UMIP.

Another key function of FLIGHT is its role in accident investigation and prevention. Dr Filippone used FLIGHT to analyse the Boeing 777 which crashed at Heathrow in January 2008. The accident was caused by freezing of the valves which take the fuel into the engine.

Dr Filippone, from the School of Mechanical, Aerospace and Civil Engineering, said the carbon emission estimates currently provided by the airline industry are far from realistic.

He added: “These estimates do not account for factors such as climb and descent, and do not account for the actual aircraft load, as well as items including on-board services and bulk cargo.

“This is the method used to calculate the cost of the carbon that sometimes is added voluntarily to the cost of the travel tickets.

“Some stock markets have started trading carbon credits as ordinary commodities. It is recognised that a carbon trading scheme will have a huge impact on the profitability of the airlines.

“If an accurate method of calculating these emissions becomes available, the entire business will become more transparent.

“FLIGHT can optimise the airplane trajectories for minimum fuel consumption and can determine flight paths that avoid or minimise contrail formation.

“We can already very well map the noise aircraft will make from the ground, from geographical constraints like built-up areas, mountains and so on.

“But we can now look at real noise around airports. This can be used to reduce noise by altering flight paths.

“We can also compare one aircraft against another. This is the most impartial way of doing this, as individual companies have the means to do this but will always favour their own products.

“My method can show the improved routes – and help reduce fuel consumption and dangerous emissions. The software can lead to us having better and greener aircraft.”
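The gap Dr Filippone describes between cruise-only accounting and full-mission accounting can be illustrated with a minimal sketch. This is not the FLIGHT model; the per-phase fuel-burn figures are invented placeholders, and only the CO2-per-fuel conversion factor (about 3.16 kg of CO2 per kg of jet fuel burned) is a standard figure.

```python
# Illustrative sketch (not the FLIGHT software): why counting only the
# cruise phase understates a flight's total CO2 output.
# All fuel-burn numbers below are hypothetical placeholders.

CO2_PER_KG_FUEL = 3.16  # kg CO2 per kg of jet fuel burned (standard factor)

# Hypothetical fuel burn by flight phase for a short-haul flight, in kg
phases = {
    "taxi": 200,
    "climb": 1500,
    "cruise": 4000,
    "descent": 400,
}

cruise_only_co2 = phases["cruise"] * CO2_PER_KG_FUEL
full_mission_co2 = sum(phases.values()) * CO2_PER_KG_FUEL

underestimate_pct = 100 * (full_mission_co2 - cruise_only_co2) / cruise_only_co2
print(f"Cruise-only estimate:  {cruise_only_co2:.0f} kg CO2")
print(f"Full-mission estimate: {full_mission_co2:.0f} kg CO2")
print(f"Cruise-only figure understates the total by {underestimate_pct:.1f}%")
```

Even with these made-up numbers, ignoring taxi, climb and descent hides roughly half the emissions, which is the kind of shortfall the article attributes to current industry estimates.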

Wind power is likely to play a large role in the future of sustainable, clean energy, but wide-scale adoption has remained elusive. Now, researchers have found wind farms’ effects on local temperatures and proposed strategies for mitigating those effects, increasing the potential to expand wind farms to a utility-scale energy resource.

Led by University of Illinois professor of atmospheric sciences Somnath Baidya Roy, the research team will publish its findings in the Proceedings of the National Academy of Sciences. The paper will appear in the journal’s Online Early Edition.

Roy first proposed a model describing the local climate impact of wind farms in a 2004 paper. But that and similar subsequent studies have been based solely on models because of a lack of available data. In fact, no field data on temperature were publicly available for researchers to use, until Roy met Neil Kelley at a 2009 conference. Kelley, a principal scientist at the National Wind Technology Center, part of the National Renewable Energy Laboratory, had collected temperature data at a wind farm in San Gorgonio, Calif., for more than seven weeks in 1989.

Analysis of Kelley’s data corroborated Roy’s modeling studies and provided the first observation-based evidence of wind farms’ effects on local temperature. The study found that the area immediately surrounding turbines was slightly cooler during the day and slightly warmer at night than the rest of the region.

As a small-scale modeling expert, Roy was most interested in determining the processes that drive the daytime cooling and nocturnal warming effects. He identified an enhanced vertical mixing of warm and cool air in the atmosphere in the wake of the turbine rotors. As the rotors turn, they generate turbulence, like the wake of a speedboat motor. Upper-level air is pulled down toward the surface while surface-level air is pushed up, causing warmer and cooler air to mix.
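The mechanism Roy identifies can be captured in a toy two-layer calculation: at night an inversion leaves the surface cooler than the air aloft, so wake-driven mixing warms the surface, while during the day the opposite holds. This is not the study's model; the temperatures and the mixing fraction below are purely illustrative.

```python
# Toy two-layer illustration (not the published model) of how rotor
# turbulence mixing upper-level and surface air shifts surface temperature.

def surface_temp_after_mixing(t_surface, t_aloft, mix_fraction):
    """Blend a fraction of upper-level air into the surface layer."""
    return (1 - mix_fraction) * t_surface + mix_fraction * t_aloft

MIX = 0.2  # hypothetical fraction of air exchanged in the turbine wake

# Night: an inversion leaves the surface cooler than the air aloft,
# so mixing warms the surface slightly.
night = surface_temp_after_mixing(t_surface=10.0, t_aloft=14.0, mix_fraction=MIX)

# Day: the surface is warmer than the air aloft, so mixing cools it slightly.
day = surface_temp_after_mixing(t_surface=25.0, t_aloft=21.0, mix_fraction=MIX)

print(f"Night: 10.0 C -> {night:.1f} C (slight warming)")
print(f"Day:   25.0 C -> {day:.1f} C (slight cooling)")
```

The sign of the effect at a given site then depends on when the winds blow hardest, which is exactly the location dependence Roy describes below.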

The question for any given wind-farm site then becomes, will warming or cooling be the predominant effect?

“It depends on the location,” Roy said. “For example, in the Great Plains region, the winds are typically stronger at night, so the nocturnal effect may dominate. In a region where daytime winds are stronger – for example a sea breeze – then the cooling effect will dominate. It’s a very location-specific thing.”

Many wind farms, especially in the Midwestern United States, are located on farmland. According to Roy, the nocturnal warming effect could offer farmland some measure of frost protection and may even slightly extend the growing season.

Understanding the temperature effects and the processes that cause them also allows researchers to develop strategies to mitigate wind farms’ impact on local climate. The group identified two possible solutions. First, engineers could develop low-turbulence rotors. Less turbulence would not only lead to less vertical mixing and therefore less climate impact, but also would be more efficient for energy generation. However, research and development for such a device could be a costly, labor-intensive process.

The second mitigation strategy is locational. Turbulence from the rotors has much less consequence in an already turbulent atmosphere. The researchers used global data to identify regions where temperature effects of large wind farms are likely to be low because of natural mixing in the atmosphere, providing ideal sites.

“These regions include the Midwest and the Great Plains as well as large parts of Europe and China,” Roy said. “This was a very coarse-scale study, but it would be easy to do a local-scale study to compare possible locations.”

Next, Roy’s group will generate models looking at both temperature and moisture transport using data from and simulations of commercial rotors and turbines. They also plan to study the extent of the thermodynamic effects, both in terms of local magnitude and of how far downwind the effects spread.

“The time is right for this kind of research so that, before we take a leap, we make sure it can be done right,” Roy said. “We want to identify the best way to sustain an explosive growth in wind energy over the long term. Wind energy is likely to be part of the solution to the atmospheric carbon dioxide and global warming problem. By identifying impacts and potential mitigation strategies, this study will contribute to the long-term sustainability of wind power.”