The Sietch Blog: The voice of The Sietch community

Wind, Solar Generation Capacity Catching Up With Nuclear Power

September 30, 2014

Advocates of nuclear energy have long been predicting its renaissance, yet this mode of producing electricity has been stalled for years. Renewable energy, by contrast, continues to expand rapidly, even if it still has a long way to go to catch up with fossil fuel power plants, writes Worldwatch Institute Senior Researcher Michael Renner in the Institute's latest Vital Signs Online analysis (bit.ly/NuclearRE).

Nuclear energy’s share of global power production has declined steadily from a peak of 17.6 percent in 1996 to 10.8 percent in 2013. Renewables increased their share from 18.7 percent in 2000 to 22.7 percent in 2012.

Following a rapid rise from its beginnings in the mid-1950s, global nuclear power generating capacity peaked at 375.3 gigawatts (GW) in 2010. Capacity has since declined to 371.8 GW in 2013, according to the International Atomic Energy Agency. Adverse economics, concern about reactor safety and proliferation, and the unresolved question of what to do with nuclear waste have put the brakes on the industry.

In stark contrast, wind and solar power generating capacities are now on the same soaring trajectory that nuclear power was on in the 1970s and 1980s. Wind capacity of 320 GW in 2013 is equivalent to nuclear capacity in 1990. The 140 GW in solar photovoltaic (PV) capacity is still considerably smaller, but growing rapidly.

In recent years, renewable energy has attracted far greater investments than nuclear power. According to estimates by the International Energy Agency (IEA), nuclear investments averaged US$8 billion per year between 2000 and 2013, compared with $37 billion for solar PV and $43 billion for wind. Individual countries, of course, set diverging priorities, but nowhere did nuclear have a major role in power generation investments.

In contrast with investment priorities, research budgets still favor nuclear technologies. Among members of the IEA (most European countries, the United States, Canada, Japan, South Korea, Australia, and New Zealand), nuclear power has received the lion’s share of public energy research and development (R&D) budgets during the last four decades. Nuclear energy attracted $295 billion, or 51 percent, of total energy R&D spending between 1974 and 2012. But this number has declined over time, from a high of 73.6 percent in 1974 to 26 percent today. Renewable energy received a cumulative total of $59 billion during the same period (10.2 percent), but its share has risen year after year.
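
As a quick sanity check on these shares, here is a minimal, illustrative calculation that uses only the figures quoted above to recover the implied total IEA-member energy R&D budget and the renewables share; nothing in it comes from sources beyond this paragraph.

```python
# Illustrative check of the energy R&D shares quoted above (IEA members, 1974-2012).
# The input figures are taken directly from the text; everything else is derived.

nuclear_rd = 295e9        # cumulative nuclear energy R&D spending, in US dollars
nuclear_share = 0.51      # nuclear's share of total public energy R&D
renewables_rd = 59e9      # cumulative renewable energy R&D spending, in US dollars

total_rd = nuclear_rd / nuclear_share        # implied total energy R&D budget
renewables_share = renewables_rd / total_rd  # implied renewables share

print(f"Implied total energy R&D, 1974-2012: ${total_rd / 1e9:.0f} billion")
print(f"Implied renewables share: {renewables_share:.1%}")  # ~10.2%, matching the text
```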

Because wind and solar power can be deployed at variable scales, and their facilities constructed in less time, these technologies are far more practical and affordable for most countries than nuclear power reactors. Worldwide, 31 countries are operating nuclear reactors on their territories. This compares to at least 85 countries that have commercial wind turbine installations.

The chances of a nuclear revival seem slim. Renewable energy, by contrast, appears to be on the right track. But it is clear that renewables have a long way to go before they can hope to supplant fossil fuels as the planet’s principal electricity source. The expansion of sources like wind and solar will have to become even more rapid in order to stave off climate disaster, and that in turn means that their fate cannot be left to the whims of the market alone.

Just as the Internet transformed the way people interact with information, cyber-physical systems (CPS) are transforming the way people interact with engineered systems. Cyber-physical systems integrate sensing, computation, control and networking into physical objects and infrastructure. Already, CPS innovations are driving development in sectors such as agriculture, energy, transportation, building design and automation, healthcare and advanced manufacturing. New advances in CPS will enable capability, adaptability, scalability, resiliency, safety, security and usability that will far exceed the simple embedded systems of today.

In December 2013, the SmartAmerica Challenge was launched as a way to bring together leaders from industry, academia and the government in order to show how cyber-physical systems (also known as 'the Internet of Things') can create jobs, new business opportunities and greater capabilities for citizens. Since then, 24 teams from more than 100 participating organizations have joined forces to tackle some of the biggest societal challenges of our time--from emergency response systems to next-generation transportation systems to smart healthcare.

On June 11, 2014, the teams will come together at the Washington D.C. Convention Center to showcase their vision for a smarter America driven by advances in CPS.

See the demonstrations and hear from speakers from the White House, the National Science Foundation, the Department of Commerce and other government agencies, companies and universities from across the United States.

Attendees will hear how advances in cyber-physical systems are impacting sectors such as smart manufacturing, healthcare, smart energy, intelligent transportation and disaster response, and how these advances deliver socio-economic benefits to America.

The National Science Foundation (NSF) is an independent federal agency that supports fundamental research and education across all fields of science and engineering. In fiscal year (FY) 2014, its budget is $7.2 billion. NSF funds reach all 50 states through grants to nearly 2,000 colleges, universities and other institutions. Each year, NSF receives about 50,000 competitive requests for funding, and makes about 11,500 new funding awards. NSF also awards about $593 million in professional and service contracts yearly.

How much fertilizer is too much for Earth's climate?

June 9, 2014

Helping farmers around the globe apply more precise amounts of fertilizer nitrogen can combat climate change.

That's the conclusion of a study published this week in the journal Proceedings of the National Academy of Sciences. In the paper, researchers at Michigan State University (MSU) provide an improved prediction of nitrogen fertilizer's contribution to greenhouse gas emissions from agricultural fields.

The study uses data from around the world to show that emissions of nitrous oxide (N2O), a greenhouse gas produced in soil following nitrogen addition, rise faster than previously expected when fertilizer rates exceed crop needs.
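
To make the idea of a faster-than-linear response concrete, here is a minimal sketch comparing a constant emission factor with a hypothetical exponential response. The functional form and every coefficient below are illustrative placeholders, not the values published in the PNAS paper.

```python
import math

# Minimal sketch of nitrous oxide emissions rising faster than linearly once
# fertilizer exceeds crop needs. The exponential form and all coefficients
# here are hypothetical, chosen only to illustrate the shape of the effect.

def n2o_linear(n_rate, emission_factor=0.01):
    """Constant emission factor: emissions simply proportional to N applied."""
    return emission_factor * n_rate  # kg N2O-N per hectare

def n2o_accelerating(n_rate, a=0.5, b=0.008):
    """Emissions grow exponentially with the N rate, so each additional
    kilogram of fertilizer matters more at high application rates."""
    return a * (math.exp(b * n_rate) - 1.0)  # kg N2O-N per hectare

for n in (50, 100, 150, 200, 250):  # kg N applied per hectare
    print(f"{n:>3} kg N/ha  linear {n2o_linear(n):5.2f}  accelerating {n2o_accelerating(n):5.2f}")
```

In this toy example the accelerating curve stays below the linear one at low application rates and overtakes it at high rates, which is the qualitative pattern the study describes: little extra impact while crops still need the nitrogen, rapidly growing emissions beyond that point.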

Nitrous oxide is the third most important greenhouse gas, behind carbon dioxide and methane.

Agriculture accounts for about 80 percent of human-caused nitrous oxide emissions worldwide, which have increased substantially in recent years due to increased nitrogen fertilizer use.

"Our motivation is to learn where to best target agricultural efforts to slow global warming," says MSU scientist Phil Robertson. Robertson is also director of the National Science Foundation (NSF) Kellogg Biological Station Long-term Ecological Research (LTER) site, one of 25 such NSF LTER sites around the globe and senior author of the paper.

"Agriculture accounts for 8 to 14 percent of all greenhouse gas production globally. We're showing how farmers can help reduce this number by applying nitrogen fertilizer more precisely."

The production of nitrous oxide can be greatly reduced if the amount of fertilizer needed by crops is exactly the amount that's applied to farmers' fields.

When plants' nitrogen needs are matched with the nitrogen that's supplied, fertilizer has substantially less effect on greenhouse gas emissions, Robertson says.

"These results vastly improve the ability of research to inform climate change, food security and the economic health of the world's farmers," says Saran Twombly, a program director in NSF's Division of Environmental Biology, which funded the research through the LTER Program.

Lead author and MSU researcher Iurii Shcherbak notes that the research is especially applicable to fertilizer practices in under-fertilized areas such as sub-Saharan Africa.

"Because nitrous oxide emissions won't be accelerated by fertilizers until crops' nitrogen needs are met, more nitrogen fertilizer can be added to under-fertilized crops without much affecting emissions," says Shcherbak.

Adding less nitrogen to over-fertilized crops elsewhere, however, would deliver major reductions to greenhouse gas emissions in those regions.

The study provides support for expanding the use of carbon credits to pay farmers for better fertilizer management and offers a framework for using this credit system around the world.

Carbon credits for fertilizer management are now available to U.S. corn farmers, says Robertson.

The research was also funded by MSU and by the U.S. Department of Energy's Great Lakes Bioenergy Research Center and the Electric Power Research Institute.

Just in time for World Oceans Day on June 8, cometh El Niño. But is El Niño really on the horizon? How certain are we of its arrival? And how will we know it's here? What effect will it have on the weather, on coastal species and on what's on our dinner tables?

To find out, the National Science Foundation (NSF) talked with biological oceanographer Mark Ohman and physical oceanographer Dan Rudnick of California's Scripps Institution of Oceanography. Their work is funded by NSF's Division of Ocean Sciences and Division of Environmental Biology.

1) What is El Niño?

(Ohman) El Niño is the formation of warmer-than-usual ocean waters in the equatorial Pacific, with extensive temperature changes along the coast of South America during the month of December--hence the Spanish name "El Niño," the Christmas child. Scientists refer to the phenomenon as the El Niño-Southern Oscillation (ENSO). Its warm ocean phase is termed El Niño, and its cool ocean phase La Niña.

2) Is El Niño predictable?

(Rudnick) Yes, to some extent. Scientists have identified the precursors of an El Niño; observations to monitor them are taking place near the equator. These observations are used in sophisticated models to predict the timing and magnitude of a developing El Niño. Right now, the models show anything from a weak to a strong El Niño ahead.

3) How do we know that changes in the ocean are the result of El Niño?

(Ohman) El Niño is the strongest year-to-year "signal" on Earth, with distinct temperature and precipitation changes over land and in the sea. Because the ocean is variable on many time scales (tidal, seasonal, year-to-year and decade-to-decade), it's essential to have a baseline of ocean measurements against which to measure departures from normal conditions.
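
As a concrete illustration of what "departure from normal conditions" means, the short sketch below computes an anomaly against a long-term baseline. The temperatures are invented for illustration only; they are not real CalCOFI or equatorial measurements.

```python
from statistics import mean

# A "departure from normal" is just an observation minus the long-term mean for
# the same time of year. The temperatures below are hypothetical.

past_june_sst = [17.0, 17.4, 16.9, 17.3, 17.5, 17.1]  # past June means, degC (made up)
observed_june_sst = 18.1                               # this June's value, degC (made up)

baseline = mean(past_june_sst)           # the long-term "normal" for June
anomaly = observed_june_sst - baseline   # positive anomalies hint at El Nino-like warming

print(f"June baseline {baseline:.2f} degC, anomaly {anomaly:+.2f} degC")
```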

Scientists at the NSF California Current Ecosystem Long-Term Ecological Research site, located in Southern California waters, have access to records of ocean conditions as far back as 1916.

4) Are all El Niños alike?

(Ohman) Not at all. Not only do El Niños vary in intensity, there are at least two major types. In one El Niño, termed Eastern Pacific, the most extreme temperature changes happen off the South American coast. In Central Pacific (CP) El Niños, the center of ocean temperature changes is much farther to the west. Some evidence suggests that the frequency of CP El Niños may be increasing.

(Rudnick) Ultimately every El Niño is different, and only some will strongly affect the coasts of the Americas.

5) When are the effects of El Niño the strongest?

(Ohman) The development of an El Niño is seasonal. The first ocean temperature changes usually begin during the Northern summer (June through September) then continue to grow, reaching their maximum during winter, from November to the following January. But precursors can sometimes be detected as early as February or March of the year of an El Niño's onset.

6) How often do El Niños occur, and how long do they last?

(Ohman) El Niños happen about every two to seven years. The last one was in 2009-10. Their duration is variable, but is usually six to eight months along the equator, with shorter time periods in higher latitudes. There have been exceptional cases of very long El Niños that lasted for two or more years, such as in 1957-59.

7) Are there new ways of observing developing El Niños?

(Rudnick) Yes, we're doing transects--criss-crossings of the ocean--using bullet-shaped, winged robotic gliders that collect underwater data. They're part of a project called Repeat Observations by Gliders in the Equatorial Region (ROGER).

These futuristic-looking gliders, called Spray gliders, traverse the oceans under their own power and are taking measurements in the Pacific Ocean near the Galapagos Islands. The information that returns with a glider tells us how the ocean is changing, and whether those changes indicate the coming of an El Niño.

8) How do the Spray gliders work?

(Rudnick) Spray gliders dive from the surface down to 1,000 meters (3,281 feet) and back, completing a cycle in six hours and covering six kilometers (3.7 miles) during that time.

The gliders carry sensors to measure temperature, salinity, current velocity, chlorophyll fluorescence (a measure of the abundance of phytoplankton), and acoustic backscatter (a measure of zooplankton). Spray gliders are launched for missions lasting about 100 days.
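
A bit of back-of-the-envelope arithmetic, using only the figures quoted above, shows how much ocean a single mission can cover.

```python
# Coverage of one Spray glider mission, derived only from the figures quoted
# above: 6 km of horizontal progress per 6-hour dive cycle, roughly 100 days.

km_per_cycle = 6.0
hours_per_cycle = 6.0
mission_days = 100

cycles_per_mission = mission_days * 24 / hours_per_cycle  # number of dive cycles
distance_km = cycles_per_mission * km_per_cycle           # horizontal distance covered
speed_km_per_h = km_per_cycle / hours_per_cycle           # average horizontal speed

print(f"{cycles_per_mission:.0f} dive cycles, about {distance_km:.0f} km of transect "
      f"per mission, at roughly {speed_km_per_h:.1f} km/h")
```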

9) How do you know when to send out the gliders?

(Rudnick) The ROGER project wasn't originally designed to observe an El Niño, but the gliders were always capable of doing so. With a scientific project funded by NSF for two years, we couldn't realistically expect to catch an El Niño.

But science does involve serendipity, and we're fortunate to have the gliders in position just as an El Niño is appearing. We expect our data to include the most high-resolution repeated ocean transects ever done across the equator during an El Niño. Results from the gliders are showing the classic signs of an El Niño, including a strengthening equatorial undercurrent.

10) What effects will El Niño have on marine ecosystems along the U.S. West Coast?

(Ohman) During El Niño, the spawning grounds of coastal fish like sardines and anchovies often move closer to the coast. As warming waters from the open ocean come ever nearer to the California coast, cool upwelled water is found mostly along the edge of the land.

Warm-water plankton and fish may be transported far to the north of their normal ranges. In some El Niños, species that live along the coast of Baja California, Mexico, may be found as far north as off British Columbia.

11) Do seabirds and marine mammals respond to El Niño?

(Ohman) It all comes down to where the fish are. Some seabirds, especially those with limited foraging ranges or narrow food preferences, may have reduced reproductive success during an El Niño. California sea lions may have less fish prey available and therefore depressed birth weights of pups. Some whales, dolphins and porpoises may move to different foraging grounds where the fishing is better.

12) Will fisheries off California be affected?

(Ohman) El Niño may have a substantial effect on the catch of, for example, market squid, one of the most commercially important species off California. The spawning of this cool-water species may be severely curtailed, or take place in deeper waters than usual.

During an El Niño, U.S. West Coast sportfishers often catch more warm-water fish such as yellowfin tuna, dolphinfish (dorado), and yellowtail, and fewer cool-water fish like rockfish and lingcod. What's on your dinner table may, for a time, look just a bit different.

Supernovas are often thought of as the tremendous explosions that mark the ends of massive stars' lives. While this is true, not all supernovas occur in this fashion. A common supernova class, called Type Ia, involves the detonation of white dwarfs -- small, dense stars that are already dead.

New results from NASA's Spitzer Space Telescope have revealed a rare example of a Type Ia explosion, in which a dead star "fed" off an aging star like a cosmic zombie, triggering a blast. The results help researchers piece together how these powerful and diverse events occur.

"It's kind of like being a detective," said Brian Williams of NASA's Goddard Space Flight Center in Greenbelt, Maryland, lead author of a study submitted to the Astrophysical Journal. "We look for clues in the remains to try to figure out what happened, even though we weren't there to see it."

Supernovas are essential factories in the cosmos, churning out heavy metals, including the iron contained in our blood. Type Ia supernovas tend to blow up in consistent ways, and thus have been used for decades to help scientists study the size and expansion of our universe. Researchers say that these events occur when white dwarfs -- the burnt-out corpses of stars like our sun -- explode.

Evidence has been mounting over the past 10 years that the explosions are triggered when two orbiting white dwarfs collide -- with one notable exception. Kepler's supernova, named after the astronomer Johannes Kepler, who was among those who witnessed it in 1604, is thought to have been preceded by just one white dwarf and an elderly companion star called a red giant. Scientists know this because the remnant sits in a pool of gas and dust shed by the aging star.

Spitzer's new observations now find a second case of a supernova remnant resembling Kepler's. Called N103B, the roughly 1,000-year-old supernova remnant lies 160,000 light-years away in the Large Magellanic Cloud, a small galaxy near our Milky Way.

"It's like Kepler's older cousin," said Williams. He explained that N103B, though somewhat older than Kepler's supernova remnant, also lies in a cloud of gas and dust thought to have been blown off by an older companion star. "The region around the remnant is extraordinarily dense," he said. Unlike Kepler's supernova remnant, no historical sightings of the explosion that created N103B are recorded.

Both the Kepler and N103B explosions are thought to have unfolded as follows: an aging star orbits its companion -- a white dwarf. As the aging star molts, which is typical for older stars, some of the shed material falls onto the white dwarf. This causes the white dwarf to build up in mass, become unstable and explode.

According to the researchers, this scenario may be rare. While the pairing of white dwarfs and red giants was thought to underlie virtually all Type Ia supernovas as recently as a decade ago, scientists now think that collisions between two white dwarfs are the most common cause. The new Spitzer research highlights the complexity of these tremendous explosions and the variety of their triggers. The case of what makes a dead star rupture is still very much an unsolved mystery.

The number of nerve cells in the human brain sounds impressive: 100 billion. And it is.

But neurons may make up as little as 15 percent of cells in the brain. The other cells are called glial cells, or glia.
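
Taking the article's own figures at face value, the implied totals are easy to work out; the short calculation below is just that arithmetic, not an independent cell count.

```python
# Arithmetic implied by the figures above: if roughly 100 billion neurons make up
# only about 15 percent of the brain's cells, the rest are glia.

neurons = 100e9
neuron_fraction = 0.15

total_cells = neurons / neuron_fraction
glia = total_cells - neurons

print(f"about {total_cells / 1e9:.0f} billion cells in total, "
      f"roughly {glia / 1e9:.0f} billion of them glia")
```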

Glia are the rising stars of the neuroscience universe. Once relegated to a simple supporting role for neurons, these cells are now thought to play an important part in early brain development, learning and memory.

A 2013 workshop funded by the National Science Foundation (NSF) enabled researchers who study learning and memory to get together (many for the first time) and reconsider glia's function.

"It was paradigm-shifting," said R. Douglas Fields, a neurobiologist at the National Institutes of Health and meeting organizer. "Everyone left enthused about the enormous potential for understanding brain function, especially learning and memory by studying how all the cells in the brain work together, rather than focusing exclusively on neurons."

When you learn something, such as how to catch a ball or use an equation, information is transmitted along the spindly arms of neurons via electrical signals. At the same time, glia called oligodendrocytes work to insulate these arms with a fatty substance called myelin so the information flows more efficiently.

Some studies show that glial cells known as astrocytes may have an even more active role in learning. Astrocytes may release chemicals that strengthen newly formed connections between neurons, making it more likely you'll be able to remember a new face, or the name of your co-worker's beloved golden retriever.

Understanding how we learn requires that scientists and engineers take a holistic approach to brain research.

Beehives contribute to a multidisciplinary study of how leaderless complex systems manage to get things done

When we refer to someone as the “queen bee,” we are suggesting the individual might be in charge of the situation. But, in fact, actual queen bees are not in charge of anything. Their job is to lay eggs, not to rule the hive.

With support from the National Science Foundation (NSF), entomologist Gene Robinson and mechanical engineer Harry Dankowicz at the University of Illinois, Urbana-Champaign have teamed up with psychologist Whitney Tabor at the University of Connecticut to study how coordination emerges in leaderless complex societies, such as a bee hive.

The researchers have also designed controlled situations to study how groups of humans manage to coordinate efforts and get things done, even in challenging situations in which there is no leader.

Ultimately, the research may contribute to solving challenges, such as the collapse of pollinating bee colonies or destructive behavior among groups of humans.

The research in this episode was supported by NSF award #124920, INSPIRE: Asynchronous communication, self-organization, and differentiation in human and insect networks. INSPIRE stands for Integrated NSF Support Promoting Interdisciplinary Research and Education.

Press Release 14-072: Harvesting sunlight to help feed and fuel the world

Three U.S./U.K. funded projects have been awarded a total of almost $9 million in additional funding to continue research projects aimed at improving the efficiency of photosynthesis

Scientists are using novel methods to explore potential new ways to boost photosynthetic efficiency.

May 30, 2014

Three research teams--each composed of scientists from the United States and the United Kingdom--have been awarded a second round of funding to continue research on new ways to improve the efficiency of photosynthesis.

Societal benefits

The ultimate goal of this potentially high-impact research is to develop methods to increase yields of important crops that are harvested for food and sustainable biofuels. If successful, the research may also support reforestation and efforts to increase the productivity of trees used to make wood, paper and the thousands of other products derived from wood and from chemicals extracted from trees. Another reason why photosynthesis is an important research topic: it has made the Earth hospitable for life by generating food and oxygen.

The second round of funding to the three refunded research teams is from the U.S.'s National Science Foundation (NSF) and the U.K.'s Biotechnology and Biological Sciences Research Council (BBSRC). This funding will total almost $9 million over three years. Each team is receiving additional funding because of the significant progress it achieved via its initial round of funding, which was also jointly awarded by NSF and the BBSRC in 2011.

Why and how can the efficiency of photosynthesis be increased?

A photosynthesizing organism uses sunlight and carbon dioxide to produce sugars that fuel the organism and release oxygen. But photosynthesis is a relatively inefficient process, usually capturing only about 5 percent of available energy, depending on how efficiency is measured. Nevertheless, some species of plants, algae and bacteria have evolved efficiency-boosting mechanisms that reduce energy losses or enhance carbon dioxide delivery to cells during photosynthesis.

Each of the three funded research teams is working, in a new and unique way, to improve, combine or engineer these types of efficiency-boosting mechanisms, so they may eventually be conferred on important crops that provide food or sustainable biofuels.

Scientists have long sought ways to increase the efficiency of photosynthesis, but so far without significant breakthroughs. The potentially transformational methods currently being pursued by the three funded teams were developed during an "Ideas Lab"--a workshop held in 2010 that was specially designed to generate innovative, potentially transformative research projects that might open longstanding bottlenecks to photosynthesis research.

If successful in helping to open such bottlenecks and generate ways to improve photosynthetic efficiency, any of the three re-funded research projects could provide critical support for efforts to address food and fuel challenges currently created by increasing human populations and other factors.

John Wingfield, NSF's assistant director for the Directorate of Biological Sciences, said, "Photosynthesis captures abundant and free solar energy and generates food and oxygen for the planet. Emerging technologies, like synthetic biology, are used in these potentially transformative projects to address the long-standing quest to increase efficiency of photosynthesis."

The three refunded projects:

1. Plug-and-play photosynthesis led by Anne Jones of Arizona State University: Some single-celled microbes capture solar energy and convert it to fuel for self-replication. Plug-and-play photosynthesis aims to distribute the capture and conversion of energy to two environments, so that each environment can be optimized for maximum efficiency for its role.

The plug-and-play team's overall goal is to capture unused energy, which would otherwise be dissipated, from a light-capturing photosynthetic cell--and transfer it to a second cell for fuel production. One way to carry out this energy transfer is to repurpose bacterial nanowires, which are tiny, electrically conductive wires that are present in some bacteria for reasons that are not yet completely understood.

These wires will be bioengineered to form an electrical bridge between light-capturing cells and fuel-producing cells--so that the wires will conduct energy from the former to the latter. To advance this project, the plug-and-play team, together with other investigators, has corrected a long-accepted mischaracterization of the biochemical composition of the bacterial nanowires, providing a new starting point for further study and engineering.

The research team is also working to develop another approach to intercellular energy transfer by creating new chemical pathways that would divert energy from the bacterial light-capturing cell to a designed biofuel-producing cell. The plug-and-play team has advanced this effort by developing a bioelectrochemical device that measures energy production by bacterial light-capturing cells.

2. Multi-Level Approaches for Generating Carbon Dioxide (MAGIC) led by John Golbeck of Pennsylvania State University: MAGIC is aimed at engineering a light-driven carbon dioxide pump that will increase the availability of carbon dioxide to an enzyme that promotes photosynthesis and will thereby increase photosynthetic efficiency.

To advance this effort, the team has, through genetic engineering, repurposed a light-sensitive protein, called halorhodopsin, which is found in a one-celled microbe called Natronomonas pharaonis; this protein helps the microbe maintain the correct chemical balance by pumping chloride into it. But the reengineered form of this protein instead pumps carbon dioxide, which is present as bicarbonate, into cells. To evaluate its pump's effectiveness, the team incorporated its light-driven bicarbonate pump into an artificial vesicle. This vesicle contains a dye whose brightness is proportional to carbon dioxide levels in the vesicle's interior--and therefore provides important information about the pump's usefulness. The team is preparing to incorporate its pump into plant cells to determine if resulting increases in the availability of carbon dioxide to plant cells will increase their growth.

3. Combining Algal and Plant Photosynthesis (CAPP) led by Martin Jonikas of Stanford University: Chlamydomonas, a unicellular alga, has a pyrenoid--a ball-shaped structure that helps the alga assimilate carbon and improve its photosynthetic efficiency. CAPP aims, for the first time, to transplant the algal pyrenoid and its associated components into higher plants, with hopes of improving these plants' photosynthetic efficiency and thus their productivity.

So far, the team has identified novel components of the pyrenoid. It has also made progress towards the development of a protein-based sensor that will be used to compare levels of bicarbonate in several cellular compartments in algae. This sensor will be used to help explain the algae's carbon concentrating mechanism and help evaluate the pyrenoid's effectiveness after it has been transplanted into higher plants.

Improving on nature

Jackie Hunter, BBSRC chief executive said, "Nature barely skims the surface when it comes to photosynthesis and making use of the sun's energy. There is huge room for improvement and these research projects are taking steps to help us to unlock hidden potential that could benefit us all. Using the sun's energy more efficiently means a greater potential to produce fuel, food, fibers, useful chemicals and much more."

Gregory Warr, an NSF program director, said, "These projects, if successful, could transform the way we generate the fuel, food, clothing and shelter that plants and microbes provide to us."

The National Science Foundation (NSF) is an independent federal agency that supports fundamental research and education across all fields of science and engineering. In fiscal year (FY) 2014, its budget is $7.2 billion. NSF funds reach all 50 states through grants to nearly 2,000 colleges, universities and other institutions. Each year, NSF receives about 50,000 competitive requests for funding, and makes about 11,500 new funding awards. NSF also awards about $593 million in professional and service contracts yearly.

What should resource managers do when the eradication of an invasive species threatens an endangered one?

In results of a study published this week in the journal Science, researchers at the University of California, Davis, examine one such conundrum now taking place in San Francisco Bay.

The study was led by UC Davis researcher Adam Lampert.

"This work advances a framework for cost-effective management solutions to the conflict between removing invasive species and conserving biodiversity," said Alan Tessier, acting deputy division director in the National Science Foundation's (NSF) Directorate for Biological Sciences, which supported the research through NSF's Dynamics of Coupled Natural and Human Systems (CNH) Program.

CNH is also co-funded by NSF's Directorates for Geosciences and Social, Behavioral & Economic Sciences.

"The project exemplifies the goals of the CNH program," says Tessier, "which are to advance the understanding of complex systems involving humans and nature."

The California Clapper Rail--a bird found only in San Francisco Bay--depends on an invasive salt marsh cordgrass, hybrid Spartina, as nesting habitat.

Its native habitat has slowly vanished over recent decades, largely due to urban development and invasion by Spartina.

Study results show that, rather than moving as fast as possible with eradication and restoration plans, the best approach is to slow down the eradication of the invasive species until restoration or natural recovery of the system provides appropriate habitat for the endangered species.

"The whole management system needs to take longer, and you need to have much more flexibility in the timing of budget expenditures over a longer time-frame."

The scientists combined biological and economic data on Spartina and on the Clapper Rail to develop a modeling framework to balance conflicting management goals, including endangered species recovery and invasive species eradication, given fiscal limitations.
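
The paper's actual model is not reproduced in this article. Purely as a toy illustration of the trade-off such a framework weighs, the sketch below (all rates, budgets and recovery speeds are invented) compares slow and fast eradication schedules by tracking the total removal cost and the lowest combined habitat available to the Rail along the way:

```python
# Toy illustration (not the study's model): the trade-off between removing
# invasive Spartina quickly and keeping enough nesting habitat in place while
# native marsh slowly recovers. All parameters are hypothetical.

def simulate(eradication_per_year, years=20, spartina=100.0,
             native=0.0, native_recovery=4.0, cost_per_unit=1.0):
    """Return total eradication cost and the worst-year combined habitat."""
    cost, habitat_floor = 0.0, float("inf")
    for _ in range(years):
        removed = min(eradication_per_year, spartina)
        spartina -= removed
        cost += removed * cost_per_unit
        native = min(100.0, native + native_recovery)   # slow natural recovery
        habitat_floor = min(habitat_floor, spartina + native)
    return cost, habitat_floor

for rate in (2, 5, 20):   # slow, moderate and fast eradication schedules
    cost, floor = simulate(rate)
    print(f"rate={rate:>2}/yr  total cost={cost:6.1f}  habitat floor={floor:5.1f}")
```

Run as is, the fast schedule drives the combined habitat to a deep low before the native marsh recovers, while the slower schedules keep a habitat floor in place throughout--which is the intuition behind stretching eradication over a longer time frame.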

While more threatened and endangered species are becoming dependent on invasive species for habitat and food, examples of the study's specific conflict are relatively rare--for now.

Another case where the eradication of an invasive species threatened to compromise the recovery of an endangered plant or animal is in the southwestern United States, where an effort to eradicate Tamarisk was cancelled because the invasive tree provides nesting habitat for the endangered Southwestern Willow Flycatcher.

"As eradication programs increase in number, we expect this will be a more common conflict in the future," said paper co-author and UC Davis scientist Ted Grosholz.

Other co-authors include scientists James Sanchirico of UC Davis and Sunny Jardine of the University of Delaware.

Stars that are just beginning to coalesce out of cool swaths of dust and gas are showcased in this image from NASA's Spitzer Space Telescope and the Two Micron All Sky Survey (2MASS). Infrared light has been assigned colors we see with our eyes, revealing young stars in orange and yellow, and a central parcel of gas in blue. This area is hidden in visible-light views, but infrared light can travel through the dust, offering a peek inside the stellar hatchery.

The dark patch to the left of center is swaddled in so much dust, even the infrared light is blocked. It is within these dark wombs that stars are just beginning to take shape.

Called the Serpens Cloud Core, this star-forming region is located about 750 light-years away in Serpens, or the "Serpent," a constellation named after its resemblance to a snake in visible light. The region is noteworthy as it only contains stars of relatively low to moderate mass, and lacks any of the massive and incredibly bright stars found in larger star-forming regions like the Orion nebula. Our sun is a star of moderate mass. Whether it formed in a low-mass stellar region like Serpens, or a high-mass stellar region like Orion, is an ongoing mystery.

The inner Serpens Cloud Core is remarkably detailed in this image. It was assembled from 82 snapshots representing a whopping 16.2 hours of Spitzer observing time. The observations were made during Spitzer's "warm mission," a phase that began in 2009 after the observatory ran out of liquid coolant, as planned.

Most of the small dots in this image are stars located behind, or in front of, the Serpens nebula.

The 2MASS mission was a joint effort between the California Institute of Technology, Pasadena; the University of Massachusetts, Amherst; and NASA's Jet Propulsion Laboratory, also in Pasadena.

JPL manages the Spitzer Space Telescope mission for NASA's Science Mission Directorate, Washington. Science operations are conducted at the Spitzer Science Center at the California Institute of Technology in Pasadena. Spacecraft operations are based at Lockheed Martin Space Systems Company, Littleton, Colorado. Data are archived at the Infrared Science Archive housed at the Infrared Processing and Analysis Center at Caltech. Caltech manages JPL for NASA.

The number of citizens and permanent residents enrolled in science and engineering (S&E) graduate programs in the United States declined in 2012, while the number of foreign students studying on temporary visas increased, according to new data from the National Science Foundation (NSF).

The 1.7 percent drop in U.S. citizens and permanent residents was countered by a 4.3 percent increase in enrollment of foreign S&E graduate students on temporary visas. Overall growth of S&E graduate student enrollment stalled for the second year in a row in 2012, the most recent year for which data are available, after experiencing annual increases of 2 to 3 percent from 2005 to 2010. S&E graduate enrollment grew by less than 1 percent in 2011 and 2012.
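
The near-flat overall growth is simply a weighted combination of the two opposing trends. The short calculation below uses hypothetical head counts (not NSF's published totals) to show how a 1.7 percent domestic decline and a 4.3 percent rise in temporary-visa enrollment can net out to well under 1 percent overall growth:

```python
# Hypothetical enrollments (NOT NSF's published figures), chosen only to show
# how the reported percentage changes can combine to <1% overall growth.
domestic, temporary_visa = 400_000, 170_000        # assumed 2011 head counts

domestic_2012 = domestic * (1 - 0.017)             # 1.7% decline
temporary_2012 = temporary_visa * (1 + 0.043)      # 4.3% increase

before = domestic + temporary_visa
after = domestic_2012 + temporary_2012
print(f"overall change: {100 * (after - before) / before:+.2f}%")
```

The exact overall figure depends on the assumed split between the two groups, but with temporary-visa students making up roughly a third of enrollment, the two trends nearly cancel.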

Astronomers have found cosmic clumps so dark, dense and dusty that they throw the deepest shadows ever recorded. Infrared observations from NASA's Spitzer Space Telescope of these blackest-of-black regions paradoxically light the way to understanding how the brightest stars form.

The clumps represent the darkest portions of a huge, cosmic cloud of gas and dust located about 16,000 light-years away. A new study takes advantage of the shadows cast by these clumps to measure the cloud's structure and mass.

The dusty cloud, results suggest, will likely evolve into one of the most massive young clusters of stars in our galaxy. The densest clumps will blossom into the cluster's biggest, most powerful stars, called O-type stars, the formation of which has long puzzled scientists. These hulking stars have major impacts on their local stellar environments while also helping to create the heavy elements needed for life.

"The map of the structure of the cloud and its dense cores we have made in this study reveals a lot of fine details about the massive star and star cluster formation process," said Michael Butler, a postdoctoral researcher at the University of Zurich in Switzerland and lead author of the study, published in The Astrophysical Journal Letters.

The state-of-the-art map has helped pin down the cloud's mass to the equivalent of 7,000 suns packed into an area spanning about 50 light-years in diameter. The map comes courtesy of Spitzer observing in infrared light, which can more easily penetrate gas and dust than short-wavelength visible light. The effect is similar to that behind the deep red color of sunsets on smoggy days -- longer-wavelength red light more readily reaches our eyes through the haze, which scatters and absorbs shorter-wavelength blue light. In this case, however, the densest pockets of star-forming material within the cloud are so thick with dust that they scatter and block not only visible light, but also almost all background infrared light.

Observing in infrared lets scientists peer into otherwise inscrutable cosmic clouds and catch the early phases of star and star cluster formation. Typically, Spitzer detects infrared light emitted by young stars still shrouded in their dusty cocoons. For the new study, astronomers instead gauged the amount of background infrared light obscured by the cloud, using these shadows to infer where material had lumped together within the cloud. These blobs of gas and dust will eventually collapse gravitationally to make hundreds of thousands of new stars.
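
In essence, the shadow method converts how much background infrared light a clump blocks into an optical depth, and the optical depth into a column of gas and dust whose mass can be added up. The sketch below walks through that conversion with invented intensities and a simplified placeholder dust scaling, not the calibration used in the study:

```python
import math

# Schematic "shadow" measurement (illustrative numbers and scalings only).
I_background = 100.0   # background infrared intensity next to the clump (arb. units)
I_observed = 5.0       # intensity seen through the clump

tau = -math.log(I_observed / I_background)     # optical depth of the clump
# Placeholder conversion from optical depth to hydrogen column density;
# a real analysis would use a calibrated dust opacity law.
N_H = tau * 2.0e21                             # hydrogen atoms per cm^2 (assumed)

pc_cm = 3.086e18                               # centimeters per parsec
area = math.pi * (0.5 * pc_cm) ** 2            # assume a clump about 1 parsec across
m_H = 1.67e-24                                 # grams per hydrogen atom
mass_g = N_H * area * m_H * 1.4                # factor 1.4 roughly accounts for helium
print(f"tau = {tau:.2f}, clump mass ~ {mass_g / 1.989e33:.0f} solar masses")
```

With these placeholder numbers a single clump comes out at a few tens of solar masses; the study performs this kind of accounting across the whole cloud to arrive at its roughly 7,000-solar-mass estimate.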

Most stars in the universe, perhaps our sun included, are thought to have formed en masse in these sorts of environments. Clusters of low-mass stars are quite common and well-studied. But clusters giving birth to higher-mass stars, like the cluster described here, are scarce and distant, which makes them harder to examine.

"In this rare kind of cloud, Spitzer has provided us with an important picture of massive star cluster formation caught in its earliest, embryonic stages," said Jonathan Tan, an associate professor of astronomy at the University of Florida, Gainesville, and co-author of the study.

The new findings will also help reveal how O-type stars form. O-type stars shine a brilliant white-blue, possess at least 16 times the sun's mass and have surface temperatures above 54,000 degrees Fahrenheit (30,000 degrees Celsius). These giant stars have a tremendous influence on their local stellar neighborhoods. Their winds and intense radiation blow away material that might draw together to create other stars and planetary systems. O-type stars are short-lived and quickly explode as supernovas, releasing enormous amounts of energy and forging the heavy elements needed to form planets and living organisms.

Researchers are not sure how, in O-type stars, it is possible for material to accumulate on scales of tens to 100 times the mass of our sun without dissipating or breaking down into multiple, smaller stars.

"We still do not have a settled theory or explanation of how these massive stars form," said Tan. "Therefore, detailed measurements of the birth clouds of future massive stars, as we have recorded in this study, are important for guiding new theoretical understanding."

Creating new brain imaging techniques is one of today's greatest engineering challenges.

The incentive for a good picture is big: looking at the brain helps us to understand how we move, how we think and how we learn. Recent advances in imaging enable us to see what the brain is doing more precisely across space and time and in more realistic conditions.

Researchers at the Massachusetts Institute of Technology and the University of Vienna achieved simultaneous functional imaging of all the neurons of the transparent roundworm C. elegans. This technique is the first that can generate 3-D movies of entire brains at the millisecond timescale.

The significance of this achievement becomes clear in light of the many engineering complexities associated with brain imaging techniques.

An imaging wish list

When 33 brain researchers put their minds together at a workshop funded by the National Science Foundation in August 2013, they identified three of the biggest challenges in mapping the human brain for better understanding, diagnosis and treatment.

Challenge one: High spatiotemporal resolution neuroimaging. Existing brain imaging technologies offer different advantages and disadvantages with respect to resolution. A method such as functional MRI that offers excellent spatial resolution (to several millimeters) can provide snapshots of brain activity on the order of seconds. Other methods, such as electroencephalography (EEG), provide precise information about brain activity over time (to the millisecond) but yield fuzzy information about the location.

The ability to conduct functional imaging of the brain, with high resolution in both space and time, would enable researchers to tease out some of the brain's most intricate workings. For example, each half of the thalamus--the brain's go-to structure for relaying sensory and motor information and a potential target for deep brain stimulation--has 13 functional areas in a package the size of a walnut.

With better spatial resolution, researchers would have an easier time determining which areas of the brain are involved in specific activities. This could ultimately help them identify more precise targets for stimulation, maximizing therapeutic benefits while minimizing unnecessary side effects.

In addition, researchers wish to combine data from different imaging techniques to study and model the brain at different levels, from molecules to cellular networks to the whole brain.

Challenge two: Perturbation-based neuroimaging. Much that we know about the brain relies on studies of dysfunction, when a problem such as a tumor or stroke affects a specific part of the brain and a correlating change in brain function can be observed.

But researchers also rely on techniques that temporarily ramp up, or turn off, brain activity in certain regions. What if the effects of such modifications on brain function could then be captured with neuroimaging techniques?

Being able to observe what happens when certain parts of the brain are activated could help researchers determine brain areas' functions and provide critical guidance for brain therapies.

Challenge three: Neuroimaging in naturalistic environments. Researchers aim to create new noninvasive methods for imaging the brain while a person interacts with his or her surroundings. This ability will become more valuable as new technologies that interface with the brain are developed.

For example, a patient undergoing brain therapy at home may choose to send information to his or her physician remotely rather than go to an office for frequent check-ups. The engineering challenges of this scenario include the creation of low-cost, wearable technologies to monitor the brain as well as the technical capability to differentiate between signs of trouble and normal fluctuations in brain activity during daily routines.

Other challenges the brain researchers identified are neuroimaging in patients with implanted brain devices; integrating imaging data from multiple techniques; and developing models, theories and infrastructures for better understanding and analyzing brain data. In addition, the research community must ensure that students are prepared to use and create new imaging techniques and data.

The workshop chair, Bin He of the University of Minnesota-Twin Cities, said, "Noninvasive human brain mapping has been a holy grail in science. Accomplishing the three grand challenges would change the future of brain science and our ability to treat numerous brain disorders that cost the nation over $500 billion each year."

Engineers, in collaboration with neuroscientists, computer scientists and other researchers, are already at work devising creative ways to address these challenges.

The workshop findings place the new technique developed by the MIT and University of Vienna researchers into greater context. Their work had to overcome several of the challenges outlined.

The team captured neural activity in three dimensions at single-cell resolution by using a strategy not previously applied to neurons--light-field microscopy--combined with a new algorithm that reverses the resulting optical distortion, a process known as deconvolution.

Light-field microscopy involves shining light at a 3-D sample and capturing the locations of fluorophores in a single image through a special set of lenses. The fluorophores in this case are modified proteins that attach to neurons and fluoresce when the neurons activate. However, the method ordinarily requires a trade-off between sample size and spatial resolution, and so it had not previously been used for live biological imaging.

The advantage of light-field microscopy, used here in an optimized form, is that it can rapidly capture the neuronal activity of whole animals rather than just still images, while providing enough spatial resolution to make functional biological imaging possible.
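
The team's actual light-field reconstruction code is not described in this article. Purely to illustrate what deconvolution does, the sketch below runs a few iterations of the generic Richardson-Lucy algorithm to sharpen a synthetically blurred image (using NumPy and SciPy; the point sources, blur kernel and noise are all invented):

```python
import numpy as np
from scipy.signal import fftconvolve

# Generic Richardson-Lucy deconvolution on synthetic data. This illustrates
# deconvolution in general, not the paper's light-field algorithm.
def richardson_lucy(blurred, psf, iterations=30):
    estimate = np.full_like(blurred, 0.5)           # flat initial guess
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / np.maximum(reblurred, 1e-12)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

rng = np.random.default_rng(0)
truth = np.zeros((64, 64))
truth[20, 30] = truth[40, 15] = 1.0                 # two point-like "neurons"
g = np.exp(-np.linspace(-2, 2, 9) ** 2)
psf = np.outer(g, g)
psf /= psf.sum()                                    # normalized blur kernel
blurred = fftconvolve(truth, psf, mode="same") + 1e-4 * rng.random((64, 64))

restored = richardson_lucy(blurred, psf)
print("brightest restored pixel:", np.unravel_index(restored.argmax(), restored.shape))
```

Applied to light-field data, the same principle--inverting a known optical blur--is carried out with a much more elaborate, volume-aware forward model so that the reconstruction recovers where in the 3-D sample each flash of fluorescence originated.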

"This elegant technique should have a large impact on the use of functional biological imaging for understanding brain cognitive function," said Leon Esterowitz, program director in NSF's Engineering Directorate, which provided partial funding for the research.

The researchers, led by Edward Boyden of MIT and Alipasha Vaziri of the University of Vienna, reported their results in this week's issue of the journal Nature Methods.

"Looking at the activity of just one neuron in the brain doesn't tell you how that information is being computed; for that, you need to know what upstream neurons are doing. And to understand what the activity of a given neuron means, you have to be able to see what downstream neurons are doing," said Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT and one of the leaders of the research team.

"In short, if you want to understand how information is being integrated from sensation all the way to action, you have to see the entire brain."

NSF-funded researchers at Dartmouth College and the Desert Research Institute have found that a combination of rising temperatures and ash from Northern Hemisphere forest fires caused the large-scale surface melting of the Greenland ice sheet in 1889 and 2012.

The findings also suggest that continued climate warming will result in nearly annual melting of the ice sheet's surface by the year 2100. Melting in the dry snow region does not contribute to sea level rise; instead, the meltwater percolates into the snowpack and refreezes, causing less sunlight to be reflected--which scientists refer to as lower albedo--and leaving the ice-sheet surface even more susceptible to future melting.

"Forest fires burning far from Greenland provided the ash that, along with the warm temperatures, caused widespread melting of the Greenland Ice Sheet," said Kaitlin Keegan, the lead author of the paper, which appears today in the Proceedings of the National Academy of Sciences.

"It required the combination of both of these effects--lowered snow albedo from ash and unusually warm temperatures--to push the ice sheet over the threshold," said Keegan. "With both the frequency of forest fires and warmer temperatures predicted to increase with climate change, widespread melt events are likely to happen much more frequently in the future."

The study did not focus explicitly on analyzing the ash to determine the source of the fires, but the presence of a high concentration of ammonium concurrent with the black carbon indicates the ash's source was large boreal forest fires in Siberia and North America in June and July 2012. Air masses from these two areas arrived at the Greenland ice sheet's summit just before the widespread melt event.

As for 1889, historical records, including testimony to Congress, describe large-scale forest fires in the Pacific Northwest of the United States that summer, but it would be difficult to pinpoint which fires deposited ash onto the ice sheet.

The research was supported by portions of several NSF awards and by NASA grant NAG04GI66G.

The massive Greenland ice sheet experiences annual melting at low elevations near the coastline, but melting at the surface is rare in the dry snow region at higher elevations in its center. In mid-July 2012, however, more than 97 percent of the ice sheet experienced surface melt, the first widespread melt during the era of satellite observation.

The Dartmouth-led team's analysis of six Greenland shallow ice cores from the dry snow region confirmed that the most recent prior widespread melt occurred in 1889. An ice core, a cylinder of ice, from the center of the ice sheet demonstrated that exceptionally warm temperatures combined with black-carbon sediments from Northern Hemisphere forest fires reduced albedo below a critical threshold in the dry snow region and caused the large-scale melting events in both 1889 and 2012.

The researchers also used Intergovernmental Panel on Climate Change data to project the frequency of widespread surface melting into the year 2100.

If, as expected, Arctic temperatures and the frequency of forest fires increase with climate change, the researchers' results suggest that large-scale melt events on the Greenland ice sheet may begin to occur almost annually by the end of the century. These events are likely to alter the surface mass balance of the ice sheet, leaving the surface susceptible to further melting. The Greenland ice sheet is the second largest ice body in the world after the Antarctic ice sheet.

"Our Earth is a system of systems; improved understanding of the complexity of the linkages and feedbacks, as in this paper, is one challenge facing the next generation of engineers and scientists--people like Kaitlin," said Mary Albert, director of the NSF-supported Ice Drilling Program Office at the Thayer School of Engineering and Keegan's doctoral adviser.

Archimedes didn't invent the lever, but he explained the principles that underlie the tool. Likewise, the National Science Foundation (NSF) didn't originate the idea of using funding as a fulcrum, but with the Industry and University Cooperative Research (I/UCRC) program, the agency has discovered an effective model for financing emerging research areas at a relatively low cost.

Over the course of more than 40 years, by making small investments in dozens of I/UCRC centers, NSF has helped to kick off thousands of research projects with real-world applications.

From projects to improve hospital operations, to efforts to develop robots for search-and-rescue operations, to technologies that identify fraudulent fingerprints, I/UCRC supports a dizzying array of research agendas that are important to fundamental science and to industry and government at large.

The accompanying slideshow provides examples of I/UCRC-supported centers around the world.

The Center for Intelligent Maintenance Systems (IMS) is often put forward as a model I/UCRC center, having garnered more than $855 million in savings for industry since it launched in 2001, largely thanks to the vision and leadership of Jay Lee, the center's director and founder. Since its inception, IMS has conducted over 100 projects in partnership with over 100 international organizations.

The 2012 NSF I/UCRC Economic Impact Study Report named IMS the number one ranked project in terms of its return on investment. For every dollar that NSF gave to the center, IMS was able to leverage industry support to provide a $270 return.

IMS's mission is to create maintenance systems that allow equipment to perform with near-zero breakdowns. Its goal is to transform traditional "fail and fix" maintenance practices into "predict and prevent" ones by focusing on frontier technologies in embedded and remote monitoring, prognostics, and intelligent decision-support tools.

"We're talking about self-diagnostics, self-monitoring the health of a product so it tells people ‘my behavior is changing. I'm degrading. I need to be fixed early.'" Lee said. "That's the vision."

Over the years, IMS has helped industry leaders like GE, Procter & Gamble and Toyota save millions by identifying problems in machinery before it breaks. A major consumer manufacturing company reportedly saved $415 million per year worldwide by incorporating technologies and tools developed at IMS.

IMS recently created software to monitor sensor degradation ("sensors for sensors") and developed the WatchDog Agent Prognostics Toolkit to add prognostics and health management algorithms to existing medical software.

"We create a lot of new knowledge that people don't think about and then often turn them into software that we develop and trademark, like Watchdog Agent," Lee said. "Everyone understands a watchdog. When the dog barks, it means a stranger has come to the house. Well, we created this software that you can put on the server or the network and it can bark if something goes wrong."

National and Global Impact

IMS is not unique in its longevity and success. Rather, it is characteristic of the kind of disruptive, translational research the I/UCRC program champions.

"I/UCRC constitutes a network of over 3,000 people, comprising a vibrant research innovation ecosystem that is really connecting industry with academia," said Larry Hornak, a program officer at NSF.

The program currently supports 67 centers in 42 states and four countries (Belgium, Germany, Russia, and India). In 2013, I/UCRCs were responsible for more than 1,400 publications, and they spun off half a dozen start-up companies.

"This is the most efficient use of federal research money that we've ever found," said Christopher Miles, Biometrics Program manager at the Department of Homeland Security (DHS) and a member of the I/UCRC-funded Center for Identity Technology Research (CITeR) where a $40,000 membership by DHS leverages over $1 million in funding each year.

Among the research topics addressed by CITeR: an automated screening system to assess the credibility of airport travelers, and new methods to detect when individuals are using fake or altered fingerprints in biometric devices.

"We're also working with industry partners in a way that doesn't exist anywhere else," Miles said.

NSF provides annual grants of $60,000 to $90,000 to the 67 centers, depending on the age and maturity of the center. These funds are matched many times over by investments from industry, other government agencies and the participating universities themselves. On average, each center has 18 dues-paying members. These dues multiply the funding for research and educational efforts that are of interest to all parties.

In 2013, NSF invested $17.8 million in the program - $8 million from the CISE directorate and $9.8 million contributed by the Engineering directorate. According to the most recent NSF economic studies, every dollar invested by the agency is supplemented by $8 from center members.

At the annual meeting of I/UCRCs in Washington, D.C. in January, program leaders and participants spoke about the significance of the program as a source of great ideas and as a model for multi-institutional collaborations.

"Our goal with the program is to determine effective ways of stimulating non-federal investment in R&D while improving the application of R&D results," Hornak said.

But the program isn't only about research and development activities. I/UCRC center research typically engenders three outcomes that are of interest to industry: ideas, products and people.

"There may be results that could be very interesting or useful to industry, or tools like software that we develop that are useful and that companies can take and try, or students that they can hire that will eventually become their workforce," Lee said. "It's a three-legged stool of value creation."

In his remarks at the Annual Meeting, Hornak cited the educational component as a significant aspect of the program.

"Students are the heart of our I/UCRCs," he said. "They're the true lasting legacy of our centers and they're the cornerstone of the program. We enhance intellectual capacity through the integration of research and education."

Byron Gillespie, Intel's engineering director of performance measurement and analysis and chair of the internal advisory board for the Center for Embedded Systems, echoed this sentiment.

"We get really good interns at Intel because of our involvement with the I/UCRC program," Gillespie said. "The researchers know who their good students are and we need their good students to be interns at our corporations."

Eighty to 90 percent of each center's funding supports students at all levels, from undergraduates to postdoctoral researchers, and these opportunities often lead to jobs. In the case of IMS, member companies recruited 85 percent of the students who worked on I/UCRC research projects over the years.

Back at the annual meeting, researchers shared best practices for running a center, as well as examples of impactful research. The meeting was accompanied by the release of a compendium of research results from the past year that had been identified by industry as those with the greatest transformative potential and economic value.

Based on her experience running the I/UCRC program, Rita Rodriguez, an NSF program officer from CISE, exhorted participants to consider outside-the-box approaches to society's most pressing problems.

"Think big, think crazy," she said, "for tomorrow the insanity may become gold."

--

An I/UCRC compendium of industry-nominated highlights is available online.

Archimedes didn’t invent the lever, but he explained the principles that underlie the tool. Likewise, the National Science Foundation (NSF) didn’t originate the idea of using funding as a fulcrum, but with the Industry and University Cooperative Research (I/UCRC) program, the agency has discovered an effective model for financing emerging research areas at a relatively low cost.

Over the course of more than 40 years, by making small investments in dozens of I/UCRC centers, NSF has helped to kick off thousands of research projects with real-world applications.

From projects to improve hospital operations to efforts to develop robots for search-and-rescue operation to technologies to identify fraudulent fingerprints, I/UCRC supports a dizzying array of research agendas that are important to fundamental science and to industry and government at-large.

The accompanying slideshow provides examples of I/UCRC-supported centers around the world.

IMS is often put forward as a model I/UCRC center, having garnered more than $855 million in savings for industry since it launched in 2001, largely based on the vision and leadership of Jay Lee, the center director and founder. Since its inception, IMS has conducted over 100 projects in partnership with over 100 international organizations.

The 2012 NSF I/UCRC Economic Impact Study Report named IMS the number one ranked project in terms of its return on investment. For every dollar that NSF gave to the center, IMS was able to leverage industry support to provide a $270 return.

IMS’s mission is to create maintenance systems that allow equipment to perform with near-zero breakdowns. Their goal is to transform the traditional “fail and fix” maintenance practices to “predict and prevent” ones by focusing on frontier technologies in embedded and remote monitoring, prognostics, and intelligent-decision support tools.

“We’re talking about self-diagnostics, self-monitoring the health of a product so it tells people ‘my behavior is changing. I’m degrading. I need to be fixed early.'” Lee said. “That’s the vision.”

Over the years, IMS has helped industry leaders like GE, Proctor and Gamble and Toyota save millions by identifying problems in machinery before it breaks. A major consumer manufacturing company reportedly saved $415 million per year worldwide by incorporating technologies and tools developed at IMS.

IMS recently created software to monitor sensor degradation (“sensors for sensors”) and developed the WatchDog Agent Prognostics Toolkit to add prognostics and health management algorithms to existing medical software.

“We create a lot of new knowledge that people don’t think about and then often turn them into software that we develop and trademark, like Watchdog Agent,” Lee said. “Everyone understands a watchdog. When the dog barks, it means a stranger has come to the house. Well, we created this software that you can put on the server or the network and it can bark if something goes wrong.”

National and Global Impact

IMS is not unique in its longevity and success. Rather, it is characteristic of the kind of disruptive, translational research the I/UCRC program champions.

“I/UCRC constitutes a network of over 3,000 people, comprising a vibrant research innovation ecosystem that is really connecting industry with academia,” said Larry Hornak, a program officer at NSF.

The program currently supports 67 centers in 42 states and four countries (Belgium, Germany, Russia, and India). In 2013, I/UCRCs were responsible for more than 1,400 publications, and they spun off half a dozen start-up companies.

“This is the most efficient use of federal research money that we’ve ever found,” said Christopher Miles, Biometrics Program manager at the Department of Homeland Security (DHS) and a member of the I/UCRC-funded Center for Identity Technology Research (CITeR) where a $40,000 membership by DHS leverages over $1 million in funding each year.

Among the research topics addressed by CITeR: an automated screening system to assess the credibility of airport travelers, and new methods to detect when individuals are using fake or altered fingerprints in biometric devices.

“We’re also working with industry partners in a way that doesn’t exist anywhere else,” Miles said.

NSF provides annual grants of $60,000 to $90,000 to the 67 centers, depending on each center’s age and maturity. These funds are matched many times over by investments from industry, other government agencies and the participating universities themselves. On average, each center has 18 dues-paying members, and these dues multiply the funding for research and educational efforts that are of interest to all parties.

In 2013, NSF invested $17.8 million in the program – $8 million from the CISE directorate and $9.8 million contributed by the Engineering directorate. According to the most recent NSF economic studies, every dollar invested by the agency is supplemented by $8 from center members.
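
A quick back-of-the-envelope check of those leverage figures, using only the numbers quoted above (illustrative arithmetic, not an official NSF accounting):

```python
# Illustrative arithmetic only, using the figures quoted in the article.
nsf_investment = 17.8e6   # NSF's 2013 program investment, in dollars
leverage_ratio = 8        # roughly $8 from members per $1 of NSF funds
print(f"Implied member co-investment: ${nsf_investment * leverage_ratio / 1e6:.0f} million")

# The CITeR example: a $40,000 DHS membership leveraging over $1 million a year
dhs_dues = 40_000
citer_pool = 1_000_000
print(f"CITeR leverage on DHS dues: at least {citer_pool // dhs_dues}x")
```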

At the annual meeting of I/UCRCs in Washington, D.C. in January, program leaders and participants spoke about the significance of the program as a source of great ideas and as a model for multi-institutional collaborations.

“Our goal with the program is to determine effective ways of stimulating non-federal investment in R&D while improving the application of R&D results,” Hornak said.

But the program isn’t only about research and development activities. I/UCRC center research typically engenders three outcomes that are of interest to industry: ideas, products and people.

“There may be results that could be very interesting or useful to industry, or tools like software that we develop that are useful and that companies can take and try, or students that they can hire that will eventually become their workforce,” Lee said. “It’s a three-legged stool of value creation.”

In his remarks at the Annual Meeting, Hornak cited the educational component as a significant aspect of the program.

“Students are the heart of our I/UCRCs,” he said. “They’re the true lasting legacy of our centers and they’re the cornerstone of the program. We enhance intellectual capacity through the integration of research and education.”

Byron Gillespie, Intel’s engineering director of performance measurement and analysis and chair of the internal advisory board for the Center for Embedded Systems, echoed this sentiment.

“We get really good interns at Intel because of our involvement with the I/UCRC program,” Gillespie said. “The researchers know who their good students are and we need their good students to be interns at our corporations.”

Between 80 and 90 percent of each center’s funding supports students at all levels, from undergraduates to postdoctoral researchers, and these opportunities often lead to jobs. In the case of IMS, member companies have recruited 85 percent of the students who worked on I/UCRC research projects over the years.

Back at the annual meeting, researchers shared best practices for running a center, as well as examples of impactful research. The meeting was accompanied by the release of a compendium of research results from the past year that had been identified by industry as those with the greatest transformative potential and economic value.

Based on her experience running the I/UCRC program, Rita Rodriguez, an NSF program officer from CISE, exhorted participants to consider outside-the-box approaches to society’s most pressing problems.

“Think big, think crazy,” she said, “for tomorrow the insanity may become gold.”

–

An I/UCRC compendium of industry-nominated highlights is available online.

]]>http://blog.thesietch.org/2014/05/19/funding-as-fulcrum/feed/0Check out the assembly line of the future!http://blog.thesietch.org/2014/05/18/check-out-the-assembly-line-of-the-future/
http://blog.thesietch.org/2014/05/18/check-out-the-assembly-line-of-the-future/#commentsMon, 19 May 2014 04:00:00 +0000http://www.nsf.gov/news/special_reports/science_nation/nanomanufacturing.jsp?WT.mc_id=USNSF_51

There’s no shortage of ideas about how to use nanotechnology, but one of the major hurdles is how to manufacture some of the new products on a large scale. With support from the National Science Foundation (NSF), University of Massachusetts (UMass) Amherst chemical engineer Jim Watkins and his team are working to make nanotechnology more practical for industrial-scale manufacturing.

One of the projects they’re working on at the NSF Center for Hierarchical Manufacturing (CHM) is a roll-to-roll process for nanotechnology that is similar to what is used in traditional manufacturing. They’re also designing a process to manufacture printable coatings that improve the way solar panels absorb and direct light. They’re even investigating the use of self-assembling nanoscale products that could have applications for many industries.

“New nanotechnologies can’t impact the U.S. economy until practical methods are available for producing products that use them in high volumes, at low cost. CHM is researching the fundamental scientific and engineering barriers that impede such commercialization, and innovating new technologies to surmount those barriers,” notes Bruce Kramer, senior advisor in the NSF Engineering Directorate’s Division of Civil, Mechanical and Manufacturing Innovation (CMMI), which funded the research.

“The NSF Center for Hierarchical Manufacturing is developing platform technologies for the economical manufacture of next generation devices and systems for applications in computing, electronics, energy conversion, resource conservation and human health,” explains Khershed Cooper, a CMMI program director.

“The center creates fabrication tools that are enabling versatile and high-rate continuous processes for the manufacture of nanostructures that are systematically integrated into higher order structures using bottom-up and top-down techniques,” Cooper says. “For example, CHM is designing and building continuous, roll-to-roll nanofabrication systems that can print, in high-volume, 3-D nanostructures and multi-layer nanodevices at sub-100 nanometer resolution, and in the process, realize hybrid electronic-optical-mechanical nanosystems.”

The research in this episode was supported by NSF award #1025020, Nanoscale Science and Engineering Centers (NSEC): Center for Hierarchical Manufacturing.

]]>http://blog.thesietch.org/2014/05/18/check-out-the-assembly-line-of-the-future/feed/0Catching HIV budding from cells: it all comes down to ALIXhttp://blog.thesietch.org/2014/05/16/catching-hiv-budding-from-cells-it-all-comes-down-to-alix/
http://blog.thesietch.org/2014/05/16/catching-hiv-budding-from-cells-it-all-comes-down-to-alix/#commentsFri, 16 May 2014 09:41:00 +0000http://blog.thesietch.org/?guid=9265ac637411b59710860725d1f030a6DiscoveryCatching HIV budding from cells: it all comes down to ALIX

The secrets of the AIDS virus may all come down to a protein named ALIX.

Researchers have devised a way to watch newly forming AIDS particles emerging or “budding” from infected human cells without interfering with the process.

The method shows that a protein named ALIX (which stands for “alg-2 interacting protein x”) gets involved during the final stages of virus replication, not early on, as was believed. ALIX assists in separating new virus buds from a cell. These buds repeat the replication process and further infect their host.

“We watch one cell at a time” and use a digital camera and special microscope to make movies and photos of the budding process, says virologist Saveez Saffarian, a scientist at the University of Utah, and co-author of a paper on HIV budding published this week in the journal PLOS ONE.

“We saw ALIX recruited into HIV budding for the first time,” he says. “Everybody knew that ALIX was involved in HIV budding, but nobody could visualize the recruitment of ALIX into the process.”

The finding has no immediate clinical significance for AIDS patients because ALIX is involved in too many critical functions like cell division to be a likely target for new medications, Saffarian says.

“We know a lot about the proteins that help HIV get out of the cell, but we don’t know how they come together to help the virus emerge,” he says. “In the next 10 to 20 years, we will know a lot more about this mechanism.”

Saffarian conducted the research with the paper’s first author Pei-I Ku, as well as researchers Mourad Bendjennat, Jeff Ballew and Michael Landesman. All are with the University of Utah.

The research was funded by the National Science Foundation (NSF).

“This project has led to the development of an important technique in basic research in cell biology and virology,” says Parag Chitnis, director of NSF’s Division of Molecular and Cellular Biosciences.

“It’s uncovering a new understanding of the viruses involved in human diseases,” says Chitnis. “This is an excellent example of how purely basic research can lead to the fundamental understanding of topics of societal need.”

Watch, don’t touch, as HIV buds

Biochemical methods used for years involve collecting millions of viruses in lab glassware and conducting analyses to reveal the proteins that make up the virus–for example, by using antibodies that bind to certain proteins and using other proteins to make the first proteins fluoresce so they can be seen.

“You’re not doing it one virus at a time,” Saffarian says. “The problem is that you don’t see the differences among similar viruses. And you don’t see the timing of how various proteins come and go to help the virus get out of the cell.”

Other methods freeze or otherwise fix cells as new HIV particles emerge, and use an electron microscope to photograph freeze-frame views of viral replication.

Saffarian employs technology known as “total internal reflection fluorescence microscopy” that looks at the dynamic processes in cells.

The method has been used to make images of the budding of HIV and a similar horse virus, EIAV.

But Saffarian says that the EIAV study didn’t show ALIX becoming involved in HIV budding, and that it wrongly indicated that ALIX got involved early in the EIAV budding process, suggesting it did the same in HIV budding.

Ku, Saffarian and colleagues combined their microscopy method with an improved way of genetically linking a green fluorescent “label” to ALIX proteins in cloned cells so they could see the proteins without harming their normal function.

The researchers tried numerous so-called “linkers” and found the one that let them see the ALIX proteins as they became involved in HIV budding.

Neither the microscope technology nor the labeling of proteins with green fluorescence is new, but “what we did that is new is to connect these fluorescence proteins to ALIX using many different kinds of linkers,” says Saffarian, to find one that let the ALIX protein function properly.

The problem with research that indicated ALIX was involved early in the budding process was that only one linker was used, and it impaired ALIX’s normal function, the scientists say.

Looking at proteins forming HIV

When HIV replicates inside a human cell, a protein named Gag makes up most of the new particles–there are 4,000 copies of the Gag protein in one HIV particle–although other proteins get involved in the process, including ALIX.

Experiments like those by Saffarian use “virus-like particles,” which are HIV particles stripped of their genetic blueprint or genome so they don’t pose an infection risk in the lab.

“Virus-like particles maintain the same geometry and same budding process as infectious HIV,” Saffarian says.

During budding, Gag proteins assemble on the inside of a cell membrane–along with ALIX in the late stages–and form a new HIV particle that pushes its way out of the cell–the process by which AIDS in an infected person spreads from cell to cell.

To look at the budding process, Ku and Saffarian placed human cells containing the particles in a small amount of liquid growth medium in a petri dish and placed it under the microscope, which is in a glass chamber kept at body temperature so the cells can remain alive for more than 48 hours.

A solid-state blue laser was aimed at the sample to make the green-labeled ALIX and red-labeled Gag proteins glow or fluoresce so they could be seen as they assembled into a virus particle.

With red-labeled Gag proteins and green-labeled ALIX proteins, “we could see ALIX come in at the end of the assembly of the virus particle,” says Saffarian. Some 100 ALIX proteins converged with the roughly 4,000 Gag molecules and assembled into a new HIV particle.
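
A rough sense of that analysis can be sketched in a few lines. The example below is not the authors’ actual pipeline; the traces, copy numbers and timings are hypothetical, and it only illustrates how the arrival of ALIX can be read off a fluorescence time trace relative to Gag.

```python
# Hypothetical analysis sketch (not the authors' pipeline): given background-
# corrected fluorescence traces at one budding site, estimate when each
# protein arrives by finding when its signal first exceeds 10% of its maximum.
import numpy as np

def arrival_time(trace, times, fraction=0.1):
    """Return the first time the trace exceeds `fraction` of its maximum."""
    return times[np.argmax(trace >= fraction * trace.max())]

t = np.linspace(0, 600, 601)                   # seconds
gag = 4000 / (1 + np.exp(-(t - 200) / 40))     # Gag builds early (~4,000 copies)
alix = 100 / (1 + np.exp(-(t - 420) / 20))     # ALIX arrives late (~100 copies)

print(f"Gag assembly begins near t = {arrival_time(gag, t):.0f} s")
print(f"ALIX recruitment begins near t = {arrival_time(alix, t):.0f} s")
# The long lag between the two is the signature of late ALIX recruitment.
```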

Enter ALIX

ALIX then brought in two other proteins, which cut off the budding virus particle from the cell when it emerged. ALIX’s position during the pinching off of new particles hadn’t been recognized before.

The researchers watched the virus particles bud one cell at a time: about 100 particles emerged during a two-hour period. Most of the ALIX proteins left when HIV assembly was complete and returned to the liquid inside a cell.

Saffarian says the discovery that ALIX doesn’t get involved until the late stages of HIV budding suggests the existence of a previously unrecognized mechanism that regulates the timing of ALIX and other proteins in assembling new HIV particles.

“We discovered that the cellular components that help with the release of the virus arrive in a much more complex timing scheme than predicted based on the biochemical data,” he says.

“The outcome of this study is promising because it uncovers a new regulatory mechanism for recruitment of cellular components to HIV budding sites, and opens the door to exciting future studies on the mechanism of HIV budding.”

To find out how to steer clear of Lyme disease during “picnic season” – a time when people are more likely to pick up ticks – the National Science Foundation spoke with NSF-funded disease ecologist Rick Ostfeld of the Cary Institute of Ecosystem Studies in Millbrook, N.Y., and program director Sam Scheiner of NSF’s Division of Environmental Biology.

Ostfeld’s research is funded by the joint NSF-NIH Ecology and Evolution of Infectious Diseases Program and NSF’s Long-Term Research in Environmental Biology Program.

1) What have we learned about how Lyme disease is transmitted?

Lyme disease can develop when someone is bitten by a blacklegged tick infected with a virulent strain of the bacterium Borrelia burgdorferi. At least 15 strains of the bacterium are found in ticks, but only a few turn up in Lyme disease patients, says Ostfeld.

Newly hatched larval ticks are born without the Lyme bacterium. They may acquire it, however, if they feast on a blood meal from an infected host. Scientists have learned that white-footed mice, eastern chipmunks and short-tailed shrews can transfer the Lyme bacterium to larval ticks.

Tick nymphs infected with Lyme bacteria pose the biggest threat to humans; their numbers are linked with the size of mouse populations.

2) What other tick-borne diseases are emerging?

People in the Northeast, Mid-Atlantic, and Midwest have experienced waves of “new” tick-borne diseases. It started in the 1980s with Lyme disease. Then in the 1990s it was anaplasmosis, followed in the early 2000s by babesiosis. Now we may be seeing the emergence of Borrelia miyamotoi, says Ostfeld.

The pathogens are transmitted by blacklegged ticks. “We suspect that they were present for decades in isolated geographic areas, but we’re working to understand what’s triggering their spread,” says Ostfeld. For example, while Lyme disease bacteria can be carried long distances by birds, Anaplasma and Babesia don’t fare well in birds.

3) How do small mammals play a part?

Mice, chipmunks and shrews play a major role in infecting blacklegged ticks with the pathogens that cause Lyme disease, anaplasmosis, and babesiosis. Ticks feeding on these animals can acquire two or even all three pathogens from a single bloodmeal, says Ostfeld.

Health care providers need to be aware, he says, “that patients with Lyme disease may be co-infected with anaplasmosis and babesiosis, which will affect symptoms, treatments, and possibly outcomes. The good news is that by regulating these small mammals, we can reduce our risk of exposure to all three illnesses.”

4) How are predators like foxes protecting us against diseases such as Lyme?

Some predators appear to be protecting our health by regulating small mammals, Ostfeld says. Research suggests that where red foxes are abundant, there is a lower incidence of Lyme disease in the human population.

5) How is climate change influencing the spread of tick-borne illnesses?

The northward and westward spread of blacklegged ticks and Lyme disease in recent decades is caused in part by climate warming, says Ostfeld. However, Lyme disease has also been spreading south, which is unlikely to be caused by climate change, scientists believe.

Models predict that Lyme disease will continue to move to higher latitudes and elevations over coming decades, a result of milder winters and longer growing seasons. “We’re currently exploring how climate warming affects the seasonal timing of host-seeking and biting behavior of ticks,” says Ostfeld.

6) Why are we more likely to contract Lyme disease in fragmented forests?

“When humans fragment forests, often through urbanization, we create conditions that favor the small mammals that amplify Lyme disease,” Ostfeld says.

The species consistently found in forest sites, no matter how small or isolated, is the white-footed mouse. And Lyme-infected ticks are often most abundant in the smallest forest patches, leading to a high risk of human exposure.

“To combat Lyme disease, one of the fastest growing threats to human health in the U.S., we need to know where it is, how it’s transmitted, and how it can be controlled,” says Scheiner.

“Long-term studies, such as work by Ostfeld and colleagues, show that the abundance of the disease-causing bacteria is determined by the number and variety of small mammals in a community. The research also demonstrates the value of conserving biodiversity as a way of limiting the spread of disease.”

7) Aren’t mice affected by ticks?

Long-term monitoring of mice and ticks in upstate New York shows that mice survive just as well when they’re infested with hundreds of ticks as when they have few or no ticks. In fact, male mice survive longer when they have more ticks, Ostfeld says.

“This is bad news, as it means that heavy tick loads won’t bring down mouse numbers, which would have helped reduce the human risk of tick-borne diseases.”

8) Why is understanding tick ecology so important?

Tick-borne disease takes a huge toll on public health and on the economy, says Ostfeld. “Take the case of Lyme disease, where diagnosis and treatment remain controversial. One thing that everyone can agree on is the importance of preventing exposure. Doing this requires understanding the ecology of ticks, pathogens and hosts.”

The more we know about where and when the risk is high, he says, the better we’ll be able to protect ourselves and respond appropriately when we’re exposed.

9) What precautions might be wise for people wishing to spend time outside?

“I’d recommend the use of tick repellents on skin or clothes, paying special attention to shoes and socks,” Ostfeld says. “Tick nymphs seek hosts on or just above the ground, so shoes and socks are the first line of defense.” Some studies show that daily tick checks during late spring and early summer can be protective.

Knowing the early symptoms of Lyme disease – fever, chills, muscle aches, often a large rash – is important. “People who live in the heaviest Lyme disease zones of the Northeast, Mid-Atlantic, and Upper Midwest,” says Ostfeld, “and who start feeling flu-like symptoms, especially from May through July, should ask their doctors to consider Lyme disease.”

10) Does this mean that we should stay inside so we don’t risk becoming infected?

The likelihood of contracting Lyme disease is very low overall, says Scheiner, “and is even lower if you take reasonable precautions. Don’t let the threat of Lyme disease keep you from enjoying the best part of spring and summer: the great outdoors.”

]]>http://blog.thesietch.org/2014/05/16/lyme-disease-ten-things-you-always-wanted-to-know-about-ticks/feed/0New data show how states are doing in sciencehttp://blog.thesietch.org/2014/05/15/new-data-show-how-states-are-doing-in-science/
http://blog.thesietch.org/2014/05/15/new-data-show-how-states-are-doing-in-science/#commentsThu, 15 May 2014 17:00:00 +0000http://blog.thesietch.org/?guid=c3224e84ea4f53ca468aaa1a91ca1e2f

The newly updated, online, interactive state data tool allows policymakers, educators and other users to discern trends in education, science and research in each of the 50 states. This free resource supplements the state data in the 2014 Science and Engineering Indicators report, the premier source of information and analysis of the nation’s position in science and engineering education and research. The biennial report is published by the National Science Board, the policy making body of the National Science Foundation (NSF).

The tool features 59 indicators of state performance in education, the scientific workforce, research and development (R&D) investments and activities, and high-tech business. It offers tables, charts and graphs, and permits users to view and customize the data in multiple ways, such as making comparisons with other states, looking at 20-year trends, and translating financial information from current into constant dollars.
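
The current-to-constant-dollar conversion, for instance, amounts to rescaling by a price deflator. The sketch below uses placeholder index values rather than the official series the tool relies on.

```python
# Sketch of a current-to-constant-dollar conversion using a price deflator.
# The index values below are placeholders, not the official series the tool uses.
deflator = {2004: 0.82, 2014: 1.00}   # hypothetical index, base year 2014

def to_constant_dollars(amount, year, base_year=2014):
    """Rescale a current-dollar amount into base-year (constant) dollars."""
    return amount * deflator[base_year] / deflator[year]

rd_2004 = 5.0e9   # hypothetical state R&D spending, in 2004 dollars
print(f"2004 spending in 2014 dollars: ${to_constant_dollars(rd_2004, 2004) / 1e9:.2f} billion")
```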

“R&D and human capital are major drivers of innovation and the economy,” said Dan Arvizu, chairman of the National Science Board. “This is a valuable resource for those who wish to see how their state is doing. Whether it’s educational achievement, your state’s workforce, or R&D investments, it’s an excellent tool to see how your state stacks up. And it will inform debates over state policies and programs.”

Arvizu said the tool is an especially valuable resource for educators and state policymakers in understanding their state’s educational landscape and for corporations and economic development officials interested in a state’s workforce or technology-based business potential.

The state data tool includes indicators in each of these areas.

State policymakers and other users can consider such factors as how their state compares with neighboring or similar states, as well as with the national average. They can see whether their state is following national trends, such as conducting more R&D over the last decade, or moving in the opposite direction.

For most indicators, the states vary widely. For example:

The number of science and engineering bachelor’s degrees awarded in a state ranges from 9 (Alaska) to 39 (Vermont) per 1,000 individuals age 18-24.

The share of a state’s workforce employed in science and engineering occupations ranges from 2.2 percent (Mississippi) to 7.6 percent (Virginia).

The amount of R&D performed, as a share of a state’s gross domestic product (GDP), ranges from 0.3 percent (Wyoming) to 8 percent (New Mexico).

“These data can shed new light on policy discussions,” Arvizu said. “If you’re lagging behind neighboring states or the rest of the nation, it may inform your assessment of the quality of your educational system or workforce, and what you may need to do to enhance your economic position and competitiveness.”

Researchers can also conduct their own analyses of the data by, for example, studying possible interrelationships among different indicators, such as education, R&D, and economic activity.

The state data tool is produced by NSF’s National Center for Science and Engineering Statistics. It supplements the latest edition of Science and Engineering Indicators, a 600-page volume that is the most comprehensive federal information and analysis of the nation’s position in science and engineering education and research. The Indicators report was released in February 2014.

The National Science Foundation (NSF) is an independent federal agency that supports fundamental research and education across all fields of science and engineering. In fiscal year (FY) 2014, its budget is $7.2 billion. NSF funds reach all 50 states through grants to nearly 2,000 colleges, universities and other institutions. Each year, NSF receives about 50,000 competitive requests for funding, and makes about 11,500 new funding awards. NSF also awards about $593 million in professional and service contracts yearly.