Research out of the San Diego Supercomputer Center and Cooperative Association for Internet Data Analysis (CAIDA) at the University of California, San Diego, in collaboration with the Universitat de Barcelona in Spain and the University of Cyprus.

The release, ahem, article:

SDSC Collaboration Aims to Create First Accurate Geometric Map of the Internet

September 09, 2010

By Jan Zverina

The San Diego Supercomputer Center and Cooperative Association for Internet Data Analysis (CAIDA) at the University of California, San Diego, in a collaboration with researchers from Universitat de Barcelona in Spain and the University of Cyprus, have created the first geometric “atlas” of the Internet as part of a project to prevent our most ubiquitous form of communication from collapsing within the next decade or so.

In a paper published this week in Nature Communications, CAIDA researcher Dmitri Krioukov, along with Marián Boguñá (Universitat de Barcelona) and Fragkiskos Papadopoulos (University of Cyprus), describe how they discovered a latent hyperbolic, or negatively curved, space hidden beneath the Internet’s topology, leading them to devise a method to create an Internet map using hyperbolic geometry. In their paper, Sustaining the Internet with Hyperbolic Mapping, the researchers say such a map would lead to a more robust Internet routing architecture because it simplifies path-finding throughout the network.

“We compare routing in the Internet today to using a hypothetical road atlas, which is really just a long encoded list of road intersections and connections that would require drivers to pore through each line to plot a course to their destination without using any geographical, or geometrical, information which helps us navigate through the space in real life,” said Krioukov, principal investigator of the project.

Now imagine that a road – or in the case of the Internet, a connection – is closed for some reason and there is no geographical atlas to plot a new course, just a long list of connections that need to be updated. “That is basically how routing in the Internet works today – it is based on a topological map that does not take into account any geometric coordinates in any space,” said Krioukov, who, with his colleagues at CAIDA, has been managing a project called Archipelago, or Ark, that constantly monitors the topology of the Internet, or the structure of its interconnections.

Like many experts, however, Krioukov is concerned that existing Internet routing, which relies on only this topological information, is not really sustainable. “It is very complicated, inefficient, and difficult to scale to the rapidly growing size of the Internet, which is now accessed by more than a billion people each day. In fact, we are already seeing parts of the Internet become intermittently unreachable, sinking into so-called black holes, which is a clear sign of instability.”

Krioukov and his colleagues have developed an in-depth theory that uses hyperbolic geometry to describe the negatively curved shape of complex networks such as the Internet. This theory appears in the paper Hyperbolic Geometry of Complex Networks, published today in Physical Review E. In their Nature Communications paper, the researchers employ this theory, Ark’s data, and statistical inference methods to build a geometric map of the Internet. They show that routing using such a map would be superior to existing routing, which is based on topology alone.

Instead of perpetually accessing and rebuilding a reference list of all available network paths, each router in the Internet would know only its hyperbolic coordinates and the coordinates of its neighbors so it could route in the right direction, only relaying the information to its closest neighbor in that direction, according to the researchers. Known as “greedy routing”, this process would dramatically increase the overall efficiency and scalability of the Internet. “We believe that using such a routing architecture based on hyperbolic geometry will create the best possible levels of efficiency in terms of speed, accuracy, and resistance to damage,” said Krioukov.
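The greedy-routing idea the researchers describe is easy to sketch in code. The coordinates, graph, and node names below are illustrative toy values, not the paper’s actual Internet embedding: each node knows only its own hyperbolic polar coordinates (r, θ) and those of its neighbors, and forwards toward whichever neighbor is hyperbolically closest to the destination.

```python
import math

def hyperbolic_distance(a, b):
    """Distance between points a=(r1, t1), b=(r2, t2) given in native
    polar coordinates of the hyperbolic plane (curvature -1)."""
    r1, t1 = a
    r2, t2 = b
    dtheta = math.pi - abs(math.pi - abs(t1 - t2))  # angular gap in [0, pi]
    return math.acosh(max(1.0, math.cosh(r1) * math.cosh(r2)
                          - math.sinh(r1) * math.sinh(r2) * math.cos(dtheta)))

def greedy_route(graph, coords, src, dst):
    """Repeatedly forward to the neighbor hyperbolically closest to the
    destination; return the path, or None at a local minimum (no
    neighbor is closer than the current node)."""
    path, current = [src], src
    while current != dst:
        nxt = min(graph[current],
                  key=lambda n: hyperbolic_distance(coords[n], coords[dst]))
        if (hyperbolic_distance(coords[nxt], coords[dst])
                >= hyperbolic_distance(coords[current], coords[dst])):
            return None
        path.append(nxt)
        current = nxt
    return path

# Toy topology: a hub near the disk center, three spokes at the periphery.
coords = {'hub': (0.2, 0.0), 'a': (2.0, 0.3), 'b': (2.0, 2.0), 'c': (2.0, 4.0)}
graph = {'hub': ['a', 'b', 'c'], 'a': ['hub'], 'b': ['hub'], 'c': ['hub']}

print(greedy_route(graph, coords, 'a', 'c'))  # ['a', 'hub', 'c']
```

If no neighbor is closer than the current node, the packet is stuck at a local minimum; the paper’s claim is that on the inferred hyperbolic map of the real Internet such failures are rare.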

However, the researchers caution that actually implementing and deploying such a routing structure in the Internet might be as challenging as, if not more challenging than, discovering its hidden space. “There are many technical and non-technical issues to be resolved before the Internet map that we found would be the map that the Internet uses,” said Krioukov.

The research was funded in part by the National Science Foundation, along with Spain’s Dirección General de Enseñanza Superior (DGES), the Generalitat de Catalunya, and Cisco Systems. The Internet mapping paper as published in Nature Communications can be found here. The Physical Review E paper can be found here.

Stem cell research is back all over the news again with court rulings and counter-rulings making the subject either okay, or not okay, for federal funding. It’s a crazy debate to my mind because stem cell research has the potential to improve the health of many, many people, and it’s a philosophical crime for it to be held hostage to the mythology of theocons. And even if the research is held back in the United States by a lack of government money, it will go on around the world, pushing the U.S. that much farther behind the cutting edge of medical research.

What Progress Has Been Made, What Is Its Potential?

New York, NY, September 9, 2010 – The use of stem cells for research and their possible application in the treatment of disease are hotly debated topics. In a special issue of Translational Research published this month, an international group of medical experts presents an in-depth and balanced view of the rapidly evolving field of stem cell research and considers the potential of harnessing stem cells for therapy of human diseases, including cardiovascular diseases, renal failure, neurologic disorders, gastrointestinal diseases, pulmonary diseases, neoplastic diseases, and type 1 diabetes mellitus.

Personalized cell therapies for treating and curing human diseases are the ultimate goal of most stem cell-based research. But apart from the scientific and technical challenges, there are serious ethical concerns, including issues of privacy, consent, and withdrawal of consent for the use of unfertilized eggs and embryos. “Publication of this special issue could not have been more timely, given the recent federal district court injunction against federal support for human embryonic stem cell research,” said Jeffrey Laurence, M.D., Professor of Medicine at Weill Cornell Medical College and Editor in Chief of Translational Research. “This court order stops all pending federal grants and contracts, as well as their peer review, suspending over 20 major research programs and over $50 million in federal funding for them,” he noted. As Dr. Francis Collins, NIH director, stated, “This decision has the potential to do serious damage to one of the most promising areas of biomedical research, just at the time when we were really gaining momentum.”

Through a series of authoritative articles, the authors highlight basic and clinical research using human embryonic and adult stem cells. Common themes include preclinical evidence supporting the potential therapeutic use of stem cells for acute and chronic diseases, the challenges in translating the preclinical work to clinical applications, as well as the results of several randomized clinical trials. The authors stress that considerable preclinical work is needed to test the potential of these approaches for translation to the clinical setting.

In considering the potential for clinical applications, some common challenges and questions persist. The issue focuses on critical questions such as whether the use of any stem cell population will increase the risk of cancer in the recipient and whether the goal of stem cell therapy is to deliver cells that can function as organ-specific cells.

Writing in a commentary on advances and challenges in translating stem cell therapies for clinical diseases, Michael A. Matthay, MD, Cardiovascular Research Institute, University of California San Francisco, notes that “the progress that has been achieved in the last 30 years in using allogeneic and autologous hematopoietic stem cells for the effective treatment of hematologic malignancies should serve as a model of how clinical applications may yet be achieved with embryonic stem cells, induced pluripotent stem cells, endothelial progenitor cells, and mesenchymal stem cells. Although several challenges exist in translating stem cell therapy to provide effective new treatments for acute and chronic human diseases, the potential for developing effective new cell-based therapies is high.”

KEY POINTS:

Bone marrow and circulating stem/progenitor cells for regenerative cardiovascular therapy
Mohamad Amer Alaiti, Masakazu Ishikawa, and Marco A. Costa
Despite initial promising pilot studies, only small improvements in a few clinical outcomes have been seen using stem cell therapies to treat heart disease in the acute or chronic setting. But new research, and a multitude of new pilot studies, may alter this scenario.

Endothelial lineage cell as a vehicle for systemic delivery of cancer gene therapy
Arkadiusz Z. Dudek
Rather than focusing on the cancer cell itself, attention to blood vessels feeding the cancerous cells, lined by endothelial cells, presents a new avenue of cancer therapy. The author discusses recent evidence that endothelial progenitor cells may be useful in treating primary and metastatic tumors. Targeted cancer gene therapy using endothelial lineage cells to target tumor sites and produce a therapeutic protein has proven feasible.

Pluripotent stem cell-derived natural killer cells for cancer therapy
David A. Knorr, and Dan S. Kaufman
The authors discuss the potential value, as well as the challenges, of using human embryonic stem cells and induced pluripotent stem cells as platforms for new cell-based therapies to treat malignant diseases.

Translation of stem cell therapy for neurological diseases
Sigrid C. Schwarz, and Johannes Schwarz
Early clinical work to develop cell-based therapy for neurologic disorders such as Parkinson’s disease is discussed.

Stem cell technology for the treatment of acute and chronic renal failure
Christopher J. Pino, and H. David Humes
The authors cover the relative potential and success to date of embryonic or induced pluripotent stem cells as therapies for regenerating functional kidney tissue.

Stem cell approaches for the treatment of type 1 diabetes mellitus
Ryan T. Wagner, Jennifer Lewis, Austin Cooney, and Lawrence Chan
The authors provide a thorough discussion of the potential of using either embryonic stem cells or induced pluripotent stem cells to generate functional islet cells, the cells of the pancreas that normally make insulin but fail to do so in severe forms of diabetes.

The articles appear in Translational Research, The Journal of Laboratory and Clinical Medicine, Volume 156, Issue 3 (September 2010) entitled Stem Cells: Medical Promises and Challenges, published by Elsevier. The entire issue will be available online via Open Access for a 3-month period beginning September 20, 2010 at www.translationalres.com.

By drilling a tiny pore just a few nanometers in diameter, called a nanopore, in the graphene membrane, they were able to measure the exchange of ions through the pore and demonstrate that a long DNA molecule can be pulled through the graphene nanopore just as a thread is pulled through the eye of a needle.

“By measuring the flow of ions passing through a nanopore drilled in graphene we have demonstrated that the thickness of graphene immersed in liquid is less than 1 nm, many times thinner than the very thin membrane which separates a single animal or human cell from its surrounding environment,” says lead author Slaven Garaj, a Research Associate in the Department of Physics at Harvard. “This makes graphene the thinnest membrane able to separate two liquid compartments from each other. The thickness of the membrane was determined by its interaction with water molecules and ions.”

Via KurzweilAI.net — Not too sure if I like this idea. Seems like we’re already heading down the path of breaking Asimov’s robotic laws with a lot of milbots in development and practice.

From the link:

“We have developed algorithms that allow a robot to determine whether it should deceive a human or other intelligent machine, and we have designed techniques that help the robot select the best deceptive strategy to reduce its chance of being discovered,” said Ronald Arkin, a Regents professor in the Georgia Tech School of Interactive Computing.

The results of robot experiments and theoretical and cognitive deception modeling were published online on September 3 in the International Journal of Social Robotics. Because the researchers explored the phenomenon of robot deception from a general perspective, the study’s results apply to robot-robot and human-robot interactions. This research was funded by the Office of Naval Research.

In the future, robots capable of deception may be valuable for several different areas, including military and search and rescue operations. A search and rescue robot may need to deceive in order to calm or receive cooperation from a panicking victim. Robots on the battlefield with the power of deception will be able to successfully hide and mislead the enemy to keep themselves and valuable information safe.

“Most social robots will probably rarely use deception, but it’s still an important tool in the robot’s interactive arsenal because robots that recognize the need for deception have advantages in terms of outcome compared to robots that do not recognize the need for deception,” said the study’s co-author, Alan Wagner, a research engineer at the Georgia Tech Research Institute.

Federally funded stem cell research back in business. Of course it’s stupid this is even an issue, much less a political football. I wrote out, and deleted, two sentences of snark about christianist theocons, but maybe those thoughts are better left to your imagination. Let’s just say I think the groups pushing against stem cell research are a serious threat to my life, liberty and pursuit of happiness and everyone would be better off if they could just form their own society on an island somewhere and institute whatever manner of holy book law they wanted to live under.

From the link:

A federal appeals court here ruled Thursday that federal financing of embryonic stem cell research could continue while the court considers a judge’s order last month that banned the government from underwriting the work.

The ruling by the United States Court of Appeals could save research mice from being euthanized, cells in petri dishes from starving and scores of scientists from a suspension of paychecks, according to arguments the Obama administration made in the case.

It could also allow the National Institutes of Health to provide $78 million to 44 scientists whose research the agency had previously agreed to finance.

The stay also gives Congress time to consider legislation that would render the ban, and the court case behind it, largely moot, a prospect that some embattled Democrats have welcomed. Despite staunch opposition by some critics, embryonic stem cell research is popular, and a legislative fight on the issue could prove a tonic for Democrats battling a tough political environment.

Don’t see any current practical applications — aside from desalination — for this right now (though with a proof-of-concept I bet this’ll be leveraged in new research), but it is impressively cool.

From the link:

In the Sept. 10 issue of Science, MIT researchers report that charged molecules, such as the sodium and chloride ions that form when salt is dissolved in water, can not only flow rapidly through carbon nanotubes, but also can, under some conditions, do so one at a time, like people taking turns crossing a bridge. The research was led by associate professor Michael Strano.

The new system allows passage of much smaller molecules, over greater distances (up to half a millimeter), than any existing nanochannel. Currently, the most commonly studied nanochannel is a silicon nanopore, made by drilling a hole through a silicon membrane. However, these channels are much shorter than the new nanotube channels (the nanotubes are about 20,000 times longer), so they only permit passage of large molecules such as DNA or polymers — anything smaller would move too quickly to be detected.

Strano and his co-authors — recent PhD recipient Chang Young Lee, graduate student Wonjoon Choi and postdoctoral associate Jae-Hee Han — built their new nanochannel by growing a nanotube across a one-centimeter-by-one-centimeter plate, connecting two water reservoirs. Each reservoir contains an electrode, one positive and one negative. Because electricity can flow only if protons — positively charged hydrogen ions, which make up the electric current — can travel from one electrode to the other, the researchers can easily determine whether ions are traveling through the nanotube.

Researchers from Australian National University have developed the ability to move particles over distances of up to 1.5 meters, using a hollow laser beam to trap light-absorbing particles in a “dark core.” The particles are then moved up and down the beam of light, which acts like an optical “pipeline.”

“When the small particles are trapped in this dark core very interesting things start to happen,” said Professor Andrei Rode. “As gravity, air currents, and random motions of air molecules around the particle push it out of the center, one side becomes illuminated by the laser while the other lies in darkness. This creates a tiny thrust, known as a photophoretic force that effectively pushes the particle back into the darkened core. In addition to the trapping effect, a portion of the energy from the beam and the resulting force pushes the particle along the hollow laser pipeline.”

Practical applications for this technology include directing and clustering nanoparticles in air, micro-manipulation of objects, sampling of atmospheric aerosols, and low-contamination/non-touch handling of sampling materials for transport of dangerous substances and microbes in small amounts, he said.

I’m a boundary-pusher in scientific research — I love nanotechnology, stem cell research, genetic research, robotics applications, and of course, I love the promise of synthetic biology. This poll finds only one-third of surveyed adults want to see the field banned until it’s better understood, but a majority do want to see more government oversight.

The release:

The Public Looks At Synthetic Biology — Cautiously

WASHINGTON, DC: Synthetic biology—defined as the design and construction of new biological parts, devices, and systems or re-design of existing natural biological systems for useful purposes—holds enormous potential to improve everything from energy production to medicine, with the global market projected to reach $4.5 billion by 2015. But what does the public know about this emerging field, and what are their hopes and concerns? A new poll of 1,000 U.S. adults conducted by Hart Research Associates and the Synthetic Biology Project at the Woodrow Wilson Center finds that two-thirds of Americans think that synthetic biology should move forward, but with more research to study its possible effects on humans and the environment, while one-third support a ban until we better understand its implications and risks. More than half of Americans believe the federal government should be involved in regulating synthetic biology.

“The survey clearly shows that much more attention needs to be paid to addressing biosafety and biosecurity risks,” said David Rejeski, Director of the Synthetic Biology Project. “In addition, government and industry need to engage the public about the science and its applications, benefits, and risks.”

The poll findings reveal that the proportion of adults who say they have heard a lot or some about synthetic biology has almost tripled in three years (from 9 percent to 26 percent). By comparison, self-reported awareness of nanotechnology increased from 24 percent to 34 percent during the same three-year period.

Although the public supports continued research in the area of synthetic biology, it also harbors concerns, including 27 percent who have security concerns (concerns that the science will be used to make harmful things), 25 percent who have moral concerns, and a similar proportion who worry about negative health consequences for humans. A smaller portion, 13 percent, worries about possible damage to the environment.

“The survey shows that attitudes about synthetic biology are not clear-cut and that its application is an important factor in shaping public attitudes towards it,” said Geoff Garin, President of Hart Research. Six in 10 respondents support the use of synthetic biology to produce a flu vaccine. In contrast, three-fourths of those surveyed have concerns about its use to accelerate the growth of livestock to increase food production. Among those for whom moral issues are the top concern, the majority views both applications in a negative light.

The findings come from a nationwide telephone survey of 1,000 adults with a margin of error of ±3.1 percentage points. This is the fifth year that Hart Research Associates has conducted a survey to gauge public opinion about nanotechnology and/or synthetic biology for the Woodrow Wilson International Center for Scholars.
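The ±3.1-point figure is simply the standard 95 percent margin of error for a simple random sample of 1,000, computed at the most conservative proportion (p = 0.5); a quick sketch:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of the 95% confidence interval for a sample proportion:
    z * sqrt(p(1-p)/n), at its widest when p = 0.5."""
    return z * math.sqrt(p * (1 - p) / n)

print(round(100 * margin_of_error(1000), 1))  # 3.1 (percentage points)
```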

The Woodrow Wilson International Center for Scholars of the Smithsonian Institution was established by Congress in 1968 and is headquartered in Washington, D.C. It is a nonpartisan institution, supported by public and private funds and engaged in the study of national and world affairs.

NIST recently constructed the world’s most powerful and stable scanning-probe microscope, with an unprecedented combination of low temperature (as low as 10 millikelvin, or 10 thousandths of a degree above absolute zero), ultra-high vacuum and high magnetic field. In the first measurements made with this instrument, the team has used its power to resolve the finest differences in the electron energies in graphene, atom-by-atom.

“Going to this resolution allows you to see new physics,” said Young Jae Song, a postdoctoral researcher who helped develop the instrument at NIST and make these first measurements.

And the new physics the team saw raises a few more questions about how the electrons behave in graphene than it answers.

Because of the geometry and electromagnetic properties of graphene’s structure, an electron in any given energy level populates four possible sublevels, called a “quartet.” Theorists have predicted that this quartet of levels would split into different energies when immersed in a magnetic field, but until recently there had not been an instrument sensitive enough to resolve these differences.

“When we increased the magnetic field at extreme low temperatures, we observed unexpectedly complex quantum behavior of the electrons,” said NIST Fellow Joseph Stroscio.

What is happening, according to Stroscio, appears to be a “many-body effect” in which electrons interact strongly with one another in ways that affect their energy levels.

A Q&A session will follow the briefing. The briefing is open to media and the public, but space is limited. You can visit http://briefing.singularityu.org/ to register for the webinar briefing.

Singularity University (SU) is an interdisciplinary university whose mission is to assemble, educate and inspire a cadre of leaders who strive to understand and facilitate the development of exponentially advancing technologies in order to address humanity’s grand challenges. With the support of a broad range of leaders in academia, business and government, SU hopes to stimulate groundbreaking, disruptive thinking and solutions aimed at solving some of the planet’s most pressing challenges. SU is based at the NASA Ames campus in Silicon Valley. For more information, go to www.singularityu.org and follow SU on Twitter and Facebook.

The flight is short (both in time and height — try around five feet) but it is another proof-of-concept for a method of getting off the surface of the Earth without the use of huge liquid or solid fuel boosters.

The video:

Also from the link:

Early this year, scientists in Japan successfully “launched” a tiny metal rocket using an unusual source of thrust – microwaves. The test was the latest proof of principle for a kind of propulsion that has never been the beneficiary of the levels of investment poured into traditional chemical rockets, but which its proponents say could some day be a superior way to get spacecraft into orbit.

Our approach leverages advances in three exponentially growing fields: synthetic biology, nanotechnology, and solar energy. Synthetic biology is a factor because synthetic molecules are currently being developed that can create ionic bonds with sodium and chloride molecules, enabling fresh water to pass through a nanofilter using only the pressure of the water above the pipe.

Nanotechnology is relevant for reverse osmosis because using a thinner filter further reduces the amount of pressure required to separate fresh water from salt water. A filtration cube measuring 165mm (6.5 inches) per side could produce 100,000 gallons of purified water per day at 1 psi. Finally, as advances in solar energy improve the efficiency of photovoltaics, the throughput of solar pumps will increase significantly, enabling more efficient movement and storage of fresh water.
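For a sense of scale, the quoted 100,000 gallons per day from a single 165 mm cube works out to a bit over four liters every second; a quick conversion (using 3.785 liters per U.S. gallon):

```python
LITERS_PER_GALLON = 3.785
SECONDS_PER_DAY = 86_400

daily_gallons = 100_000  # claimed output of the 165 mm filtration cube
liters_per_second = daily_gallons * LITERS_PER_GALLON / SECONDS_PER_DAY
print(f"{liters_per_second:.1f} L/s")  # 4.4 L/s
```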

Although the individual components described above have not advanced to a point where the solution is possible at present, we were able to speak with leading experts in each of these areas as to the timeline for these capabilities to be realized.

Synthetic molecules capable of bonding with sodium and chloride molecules have already been created, but have not yet been converted to an appropriate form for storage, such as a cartridge. This is expected to occur in the next 2-3 years. Filters are currently in the 10-15nm range, and are expected to reach 1nm over the next 3-5 years. As with the synthetic molecules, 1nm tubes have been built; just not assembled into a filter at this point. Photovoltaics are currently approximately 12% efficient, but it is anticipated that 20% efficiency is achievable in the next 5 years.

A possible implementation of our Naishio solution. The pressure from the water volume is sufficient to propel fresh water across the membrane (A), and photovoltaics (D) generate all the energy needed to pump water from the repository (C) to the water tank and circulator (E). Sensors (B) communicate between the solar pump and membrane to regulate the water level and ensure it doesn’t become contaminated. (Image: Sarah Jane Pell).

Magic mushrooms reduce anxiety over cancer

September 7, 2010

The active ingredient of magic mushrooms, psilocybin, has been shown to reduce anxiety and improve mood in people with cancer, researchers from Harbor-UCLA Medical Center have found.

Volunteers reported feeling less depressed and anxious two weeks after receiving psilocybin. Six months later, the level of depression was significantly lower in all volunteers than it had been before the treatments began.

Okay, just yesterday I blogged that a lot of the time the mundane “a ha” moment that puts together well-known materials and processes leads to scientific advancement (the case I was referring to in the post was a simple acid bath technique that made creating solar cells much cheaper). And then again sometimes the big sexy breakthrough gets the headline (as usual) and really deserves it.

If this technique for solar cells, which self-assembles the light-harvesting element in the cell and then breaks it down for re-assembly, essentially copying what plants do in their chloroplasts, is able to reach acceptable levels of efficiency, it will be an absolute game-changer. Instead of a solar cell that’s (hopefully) constantly bombarded with the full effect of the sun and constantly degrading under the solar assault, these cells would be completely renewed by each reassembly. No degradation over time, just a brand-new light-harvesting element produced by a relatively simple chemical process.

From the second link:

The system Strano’s team produced is made up of seven different compounds, including the carbon nanotubes, the phospholipids, and the proteins that make up the reaction centers, which under the right conditions spontaneously assemble themselves into a light-harvesting structure that produces an electric current. Strano says he believes this sets a record for the complexity of a self-assembling system. When a surfactant — similar in principle to the chemicals that BP has sprayed into the Gulf of Mexico to break apart oil — is added to the mix, the seven components all come apart and form a soupy solution. Then, when the researchers removed the surfactant by pushing the solution through a membrane, the compounds spontaneously assembled once again into a perfectly formed, rejuvenated photocell.

“We’re basically imitating tricks that nature has discovered over millions of years” — in particular, “reversibility, the ability to break apart and reassemble,” Strano says. The team, which included postdoctoral researcher Moon-Ho Ham and graduate student Ardemis Boghossian, came up with the system based on a theoretical analysis, but then decided to build a prototype cell to test it out. They ran the cell through repeated cycles of assembly and disassembly over a 14-hour period, with no loss of efficiency.

A relatively simple brute-force manufacturing step creates solar cells at much lower cost. The big, sexy breakthroughs are great and technological leaps are fun, but a lot of the time it’s the almost mundane “a ha” moment that puts together well-known materials and processes that takes a technology to the next step. This particular discovery sounds very promising since it both reduces production costs and retains nearly maximum solar efficiency.

From the link:

A new low-cost etching technique developed at the U.S. Department of Energy’s National Renewable Energy Laboratory can put a trillion holes in a silicon wafer the size of a compact disc.

As the tiny holes deepen, they make the silvery-gray silicon appear darker and darker until it becomes almost pure black and able to absorb nearly all colors of light the sun throws at it.

At room temperature, the black silicon wafer can be made in about three minutes. At 100 degrees F, it can be made in less than a minute.

The breakthrough by NREL scientists likely will lead to lower-cost solar cells that are nonetheless more efficient than the ones used on rooftops and in solar arrays today.

R&D Magazine recently awarded the NREL team one of its R&D 100 awards for Black Silicon Nanocatalytic Wet-Chemical Etch. Called “the Oscars of Invention,” the R&D 100 awards recognize the most significant scientific breakthroughs of the year.

Also from the link (and conveniently making my point above about “almost mundane ‘a ha’ moment”s):

In a string of outside-the-box insights combined with some serendipity, Branz and colleagues Scott Ward, Vern Yost and Anna Duda greatly simplified that process.

Rather than laying the gold with vacuums and pumps, why not just spray it on? Ward suggested.

Rather than layering the gold and then adding the acidic mixture, why not mix it all together from the outset? Duda suggested.

In combination, those two suggestions yielded even better results.

A silver wafer reflects the face of NREL research scientist Hao-Chih Yuan, before the wafer is washed with a mix of acids. The acids etch holes, absorbing light and turning the wafer black. Credit: Dennis Schroeder

Via KurzweilAI.net — Great news, but as always I’d love to see a market-ready application come out of this research in the near future. Blogging about nanotech breakthroughs is all well and good, but it’s even better when I get the chance to blog about a real-world application of said breakthroughs.

From the link:

High-speed graphene transistors achieve world-record 300 GHz

September 3, 2010 by Editor

UCLA researchers have fabricated the fastest graphene transistor to date, using a new fabrication process with a nanowire as a self-aligned gate.

Self-aligned gates are a key element in modern transistors, which are semiconductor devices used to amplify and switch electronic signals. Gates are used to switch the transistor between various states, and self-aligned gates were developed to deal with problems of misalignment encountered because of the shrinking scale of electronics.

“This new strategy overcomes two limitations previously encountered in graphene transistors,” professor of chemistry and biochemistry Xiangfeng Duan said. “First, it doesn’t produce any appreciable defects in the graphene during fabrication, so the high carrier mobility is retained. Second, by using a self-aligned approach with a nanowire as the gate, the group was able to overcome alignment difficulties previously encountered and fabricate very short-channel devices with unprecedented performance.”

These advances allowed the team to demonstrate the highest speed graphene transistors to date, with a cutoff frequency up to 300 GHz — comparable to the very best transistors from high-electron mobility materials such as gallium arsenide or indium phosphide.

Graphene, a one-atom-thick layer of graphitic carbon, has great potential to make electronic devices such as radios, computers and phones faster and smaller. With the highest known carrier mobility — the speed at which electronic information is transmitted by a material — graphene is a good candidate for high-speed radio-frequency electronics. High-speed radio-frequency electronics may also find wide applications in microwave communication, imaging and radar technologies.

Funding for this research came from the National Science Foundation and the National Institutes of Health.

Atomic force micrograph of ~1 micrometer wide × 1.5 micrometers (millionths of a meter) tall area. The ice crystals (lightest blue) are 0.37 nanometers (billionths of a meter) high, which is the height of a 2-water molecule thick ice crystal. A one-atom thick sheet of graphene is used to conformally coat and trap water that has adsorbed onto a mica surface, permitting it to be imaged and characterized by atomic force microscopy. Detailed analysis of such images reveals that this (first layer) of water is ice, even at room temperature. At high humidity levels, a second layer of water will coat the first layer, also as ice. At very high humidity levels, additional layers of water will coat the surface as droplets. Credit: Heath group/Caltech

News from the world of stem cell research. This item comes from the United Kingdom, and if the current political climate on the right toward ground-breaking science and medical research holds fast, most stem cell news will be coming from anywhere but the United States.

This development does look very promising.

From the link:

In a paper published in the September edition of Nature Materials, a team of Nottingham scientists led by Professor Morgan Alexander in the University’s School of Pharmacy, reveal they have discovered some man-made acrylate polymers which allow stem cells to reproduce while maintaining their pluripotency.

Professor Alexander said: “This is an important breakthrough which could have significant implications for a wide range of stem cell therapies, including cancer, heart failure, muscle damage and a number of neurological disorders such as Parkinson’s and Huntington’s.

“One of these new manmade materials may translate into an automated method of growing pluripotent stem cells which will be able to keep up with demand from emerging therapies that will require cells on an industrial scale, while being both cost-effective and safer for patients.”

Sometimes when I run a “beautiful space image” post the beauty is in the sheer awe the image inspires, and other times the photo might not be much to look at, but the science behind it is amazing on its own merits.

A team of astronomers led by the University of Colorado at Boulder is charting the interactions between Supernova 1987A and a glowing gas ring encircling the supernova remnant known as the “String of Pearls.” Credit: NASA

Also from the link:

The team detected significant brightening of the emissions from Supernova 1987A, which were consistent with some theoretical predictions about how supernovae interact with their immediate galactic environment. Discovered in 1987, Supernova 1987A is the closest exploding star to Earth to be detected since 1604 and resides in the nearby Large Magellanic Cloud, a dwarf galaxy adjacent to our own Milky Way Galaxy.

The team observed the supernova in optical, ultraviolet and near-infrared light, charting the interplay between the stellar explosion and the famous “String of Pearls,” a glowing ring 6 trillion miles in diameter encircling the supernova remnant that has been energized by X-rays. The gas ring likely was shed some 20,000 years before the supernova exploded, and shock waves rushing out from the remnant have been brightening some 30 to 40 pearl-like “hot spots” in the ring — objects that likely will grow and merge together in the coming years to form a continuous, glowing circle.
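To put that figure in perspective, a quick back-of-the-envelope conversion (my arithmetic, not the release’s, using the standard length of a light-year) shows the ring is about one light-year across:

```python
# Convert the "String of Pearls" ring diameter to light-years.
# 1 light-year ≈ 5.879 trillion miles (standard value).
ring_diameter_miles = 6e12        # 6 trillion miles, per the release
miles_per_light_year = 5.879e12

diameter_ly = ring_diameter_miles / miles_per_light_year
print(f"Ring diameter: {diameter_ly:.2f} light-years")  # about 1.02
```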

And announcing the five science investigations selected for the first solar mission. No need to rush and book reservations, though, since the mission is a good eight years from launch.

News hot from today’s inbox.

The release:

NASA Selects Investigations for First Mission to Encounter the Sun

WASHINGTON, Sept. 2 /PRNewswire-USNewswire/ — NASA has begun development of a mission to visit and study the sun closer than ever before. The unprecedented project, named Solar Probe Plus, is slated to launch no later than 2018.

The small car-sized spacecraft will plunge directly into the sun’s atmosphere approximately four million miles from our star’s surface. It will explore a region no other spacecraft ever has encountered. NASA has selected five science investigations that will unlock the sun’s biggest mysteries.

“The experiments selected for Solar Probe Plus are specifically designed to solve two key questions of solar physics — why is the sun’s outer atmosphere so much hotter than the sun’s visible surface and what propels the solar wind that affects Earth and our solar system?” said Dick Fisher, director of NASA’s Heliophysics Division in Washington. “We’ve been struggling with these questions for decades and this mission should finally provide those answers.”

As the spacecraft approaches the sun, its revolutionary carbon-composite heat shield must withstand temperatures exceeding 2550 degrees Fahrenheit and blasts of intense radiation. The spacecraft will have an up close and personal view of the sun enabling scientists to better understand, characterize and forecast the radiation environment for future space explorers.

NASA invited researchers in 2009 to submit science proposals. Thirteen were reviewed by a panel of NASA and outside scientists. The total dollar amount for the five selected investigations is approximately $180 million for preliminary analysis, design, development and tests.

The selected proposals are:

— Solar Wind Electrons Alphas and Protons Investigation: principal investigator, Justin C. Kasper, Smithsonian Astrophysical Observatory in Cambridge, Mass. This investigation will specifically count the most abundant particles in the solar wind — electrons, protons and helium ions — and measure their properties. The investigation also is designed to catch some of the particles in a special cup for direct analysis.

— Wide-field Imager: principal investigator, Russell Howard, Naval Research Laboratory in Washington. This telescope will make 3-D images of the sun’s corona, or atmosphere. The experiment actually will see the solar wind and provide 3-D images of clouds and shocks as they approach and pass the spacecraft. This investigation complements instruments on the spacecraft providing direct measurements by imaging the plasma the other instruments sample.

— Fields Experiment: principal investigator, Stuart Bale, University of California Space Sciences Laboratory in Berkeley, Calif. This investigation will make direct measurements of electric and magnetic fields, radio emissions, and shock waves that course through the sun’s atmospheric plasma. The experiment also serves as a giant dust detector, registering voltage signatures when specks of space dust hit the spacecraft’s antenna.

— Integrated Science Investigation of the Sun: principal investigator, David McComas of the Southwest Research Institute in San Antonio. This investigation consists of two instruments that will take an inventory of elements in the sun’s atmosphere using a mass spectrometer to weigh and sort ions in the vicinity of the spacecraft.

— Heliospheric Origins with Solar Probe Plus: principal investigator, Marco Velli of NASA’s Jet Propulsion Laboratory in Pasadena, Calif. Velli is the mission’s observatory scientist, responsible for serving as a senior scientist on the science working group. He will provide an independent assessment of scientific performance and act as a community advocate for the mission.

“This project allows humanity’s ingenuity to go where no spacecraft has ever gone before,” said Lika Guhathakurta, Solar Probe Plus program scientist at NASA Headquarters, in Washington. “For the very first time, we’ll be able to touch, taste and smell our sun.”

The Solar Probe Plus mission is part of NASA’s Living with a Star Program. The program is designed to understand aspects of the sun and Earth’s space environment that affect life and society. The program is managed by NASA’s Goddard Space Flight Center in Greenbelt, Md., with oversight from NASA’s Science Mission Directorate’s Heliophysics Division. The Johns Hopkins University Applied Physics Laboratory in Laurel, Md., is the prime contractor for the spacecraft.

Everyone thought the biggest threat from China was the sheer volume of Treasuries held by that nation and the potential stranglehold it has over the U.S. economy. Realistically that has never been a serious issue because, as such a heavy investor in the U.S. economy, China has a vested interest in our financial sector remaining strong.

Now squeezing us on manufacturing vital elements of computing and electronics by taking complete control over rare earth metals is a different angle of attack altogether. You know the U.S. government is taking this very seriously when it has both the Department of Energy and the Department of Defense on the job.

The release:

China’s monopoly on 17 key elements sets stage for supply crisis

China’s monopoly on the global supply of elements critical for production of computer hard disc drives, hybrid-electric cars, military weapons, and other key products — and its increasingly strict limits on exports — is setting the stage for a crisis in the United States. That’s the topic of the cover story of Chemical & Engineering News (C&EN), ACS’ weekly newsmagazine.

C&EN Senior Editor Mitch Jacoby and Contributing Editor Jessie Jiang explain that the situation involves a family of chemical elements that may soon start to live up to their name, the “rare earths.” China has virtually cornered the global market on them, and produces most of the world’s supply. Since 2005, China has been raising prices and restricting exports, most recently in 2010, fostering a potential supply crisis in the U.S.

The article describes how the U.S. is now responding to this emerging crisis. To boost supplies, for instance, plans are being developed to resume production at the largest U.S. rare-earth mine — Mountain Pass in southern California — which has been dormant since 2002. The U.S. Department of Energy and the Department of Defense are among the government agencies grappling with the problem.

This visible light image, made with the Wide Field Imager on the MPG/ESO 2.2-meter telescope at the La Silla Observatory in Chile, shows the galaxy NGC 4666 in the center. It is a starburst galaxy, about 80 million light-years from Earth, in which particularly intense star formation is taking place. The starburst is thought to be caused by gravitational interactions with neighboring galaxies, including NGC 4668, visible to the lower left. A combination of supernova explosions and strong winds from massive stars in the starburst region drives a vast outflow of gas from the galaxy into space — a so-called “superwind”. NGC 4666 had previously been observed in X-rays by the ESA XMM-Newton space telescope, and these visible light observations were made to target background objects detected in the earlier X-ray images. This picture, which covers a field of 16 by 12 arcminutes, is a combination of twelve CCD frames, 67 megapixels each, taken through blue, green and red filters. Credit: ESO/J. Dietrich

Hit the link up there for more about NGC 4666, and a (sorta cheesy) video of its location in space. And for even more info, here’s the release.

Looks pretty promising. I haven’t blogged about alternative lighting in a while, but I remain very fascinated by the potential for LED lighting. I have two LED bulbs right now, and as cool as they are (figuratively and literally) they suffer from the main complaints against LEDs right now: they are quite dim (albeit by design in these particular bulbs’ case) and they are very unidirectional, suitable only for spot lighting applications.

Here’s the latest news in LEDs, and it looks to be quite ambitious and very interesting. I’m looking forward to being able to replace all my residential lighting with crazy long-lasting and cheap-to-run LEDs.

From the link:

Researchers from the Nichia Corporation in Tokushima, Japan, have set an ambitious goal: to develop a white LED that can replace every interior and exterior light bulb currently used in homes and offices. The properties of their latest white LED – a luminous flux of 1913 lumens and a luminous efficacy of 135 lumens per watt at 1 amp – enable it to emit more light than a typical 20-watt fluorescent bulb, as well as more light for a given amount of power. With these improvements, the researchers say that the new LED can replace traditional fluorescent bulbs for all general lighting applications, and also be used for automobile headlights and LCD backlighting.
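As a quick sanity check on those numbers (my arithmetic, not Nichia’s), luminous flux divided by luminous efficacy gives the implied electrical power draw:

```python
# Implied power draw of the Nichia white LED, from the figures in the article.
luminous_flux_lm = 1913     # lumens
efficacy_lm_per_w = 135     # lumens per watt

power_w = luminous_flux_lm / efficacy_lm_per_w
print(f"Implied power draw: {power_w:.1f} W")  # ~14.2 W, vs. the 20 W fluorescent it outshines
```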

The history of luminous efficacy in different types of lighting shows the rapid improvements in white LEDs. The years in which the white light sources were developed are also shown. Credit: Yukio Narukawa, et al.

Silicon oxide circuits break barrier

Nanocrystal conductors could lead to massive, robust 3-D storage

Rice University scientists have created the first two-terminal memory chips that use only silicon, one of the most common substances on the planet, in a way that should be easily adaptable to nanoelectronic manufacturing techniques and promises to extend the limits of miniaturization subject to Moore’s Law.

Last year, researchers in the lab of Rice Professor James Tour showed how electrical current could repeatedly break and reconnect 10-nanometer strips of graphite, a form of carbon, to create a robust, reliable memory “bit.” At the time, they didn’t fully understand why it worked so well.

Now, they do. A new collaboration by the Rice labs of professors Tour, Douglas Natelson and Lin Zhong proved the circuit doesn’t need the carbon at all.

Jun Yao, a graduate student in Tour’s lab and primary author of the paper to appear in the online edition of Nano Letters, confirmed his breakthrough idea when he sandwiched a layer of silicon oxide, an insulator, between semiconducting sheets of polycrystalline silicon that served as the top and bottom electrodes.

Applying a charge to the electrodes created a conductive pathway by stripping oxygen atoms from the silicon oxide and forming a chain of nano-sized silicon crystals. Once formed, the chain can be repeatedly broken and reconnected by applying a pulse of varying voltage.

The nanocrystal wires are as small as 5 nanometers (billionths of a meter) wide, far smaller than circuitry in even the most advanced computers and electronic devices.

“The beauty of it is its simplicity,” said Tour, Rice’s T.T. and W.F. Chao Chair in Chemistry as well as a professor of mechanical engineering and materials science and of computer science. That, he said, will be key to the technology’s scalability. Silicon oxide switches or memory locations require only two terminals, not three (as in flash memory), because the physical process doesn’t require the device to hold a charge.

It also means layers of silicon-oxide memory can be stacked in tiny but capacious three-dimensional arrays. “I’ve been told by industry that if you’re not in the 3-D memory business in four years, you’re not going to be in the memory business. This is perfectly suited for that,” Tour said.

Silicon-oxide memories are compatible with conventional transistor manufacturing technology, said Tour, who recently attended a workshop by the National Science Foundation and IBM on breaking the barriers to Moore’s Law, which states the number of devices on a circuit doubles every 18 to 24 months.
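For a sense of how quickly that doubling compounds, here is a rough sketch of the arithmetic (my numbers, purely illustrative, applied to Tour’s four-year 3-D memory horizon):

```python
# Device-count multiplier implied by Moore's Law over a given horizon,
# assuming one doubling every 18 to 24 months.
def moore_multiplier(years, doubling_months):
    return 2 ** (years * 12 / doubling_months)

print(moore_multiplier(4, 24))  # 4x at the slow (24-month) end
print(moore_multiplier(4, 18))  # ~6.3x at the fast (18-month) end
```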

“Manufacturers feel they can get pathways down to 10 nanometers. Flash memory is going to hit a brick wall at about 20 nanometers. But how do we get beyond that? Well, our technique is perfectly suited for sub-10-nanometer circuits,” he said.

Austin tech design company PrivaTran is already bench testing a silicon-oxide chip with 1,000 memory elements built in collaboration with the Tour lab. “We’re real excited about where the data is going here,” said PrivaTran CEO Glenn Mortland, who is using the technology in several projects supported by the Army Research Office, National Science Foundation, Air Force Office of Scientific Research, and the Navy Space and Naval Warfare Systems Command Small Business Innovation Research (SBIR) and Small Business Technology Transfer programs.

“Our original customer funding was geared toward more high-density memories,” Mortland said. “That’s where most of the paying customers see this going. I think, along the way, there will be side applications in various nonvolatile configurations.”

Yao had a hard time convincing his colleagues that silicon oxide alone could make a circuit. “Other group members didn’t believe him,” said Tour, who added that nobody recognized silicon oxide’s potential, even though it’s “the most-studied material in human history.”

“Most people, when they saw this effect, would say, ‘Oh, we had silicon-oxide breakdown,’ and they throw it out,” he said. “It was just sitting there waiting to be exploited.”

In other words, what used to be a bug turned out to be a feature.

Yao went to the mat for his idea. He first substituted a variety of materials for graphite and found none of them changed the circuit’s performance. Then he dropped the carbon and metal entirely and sandwiched silicon oxide between silicon terminals. It worked.

“It was a really difficult time for me, because people didn’t believe it,” Yao said. Finally, as a proof of concept, he cut a carbon nanotube to localize the switching site, sliced out a very thin piece of silicon oxide by focused ion beam and identified a nanoscale silicon pathway under a transmission electron microscope.

“This is research,” Yao said. “If you do something and everyone nods their heads, then it’s probably not that big. But if you do something and everyone shakes their heads, then you prove it, it could be big.

“It doesn’t matter how many people don’t believe it. What matters is whether it’s true or not.”

They will also be resistant to radiation, which should make them suitable for military and NASA applications. “It’s clear there are lots of radiation-hardened uses for this technology,” Mortland said.

Silicon oxide also works in reprogrammable gate arrays being built by NuPGA, a company formed last year through collaborative patents with Rice University. NuPGA’s devices will assist in the design of computer circuitry based on vertical arrays of silicon oxide embedded in “vias,” the holes in integrated circuits that connect layers of circuitry. Such rewritable gate arrays could drastically cut the cost of designing complex electronic devices.

###

Zhengzong Sun, a graduate student in Tour’s lab, was co-author of the paper with Yao; Tour; Natelson, a Rice professor of physics and astronomy; and Zhong, assistant professor of electrical and computer engineering.

The David and Lucille Packard Foundation, the Texas Instruments Leadership University Fund, the National Science Foundation, PrivaTran and the Army Research Office SBIR supported the research.

CAPTION: A 1k silicon oxide memory has been assembled by Rice and a commercial partner as a proof-of-concept. Silicon nanowire forms when charge is pumped through the silicon oxide, creating a two-terminal resistive switch. (Images courtesy Jun Yao/Rice University)

(Note: I recommend hitting the link for the first image — 0830_F2.jpg. It’s too big to run in this blog full-size, but it’s a great illustration of the chip.)

With hybrids and electric cars becoming more commonplace, the old miles-per-gallon rating just doesn’t cut it for fuel efficiency comparison shopping. So in steps the Environmental Protection Agency with a brand new label. Not sure exactly how clear this is at first glance, but it does offer more than just MPG information.

From the link:

All new cars and light-duty trucks sold in the U.S. are required to have a label that displays fuel economy information that is designed to help consumers make easy and well-informed comparisons between vehicles. Most people recognize the current label (or “window sticker”) by the gas tank graphic and city and highway Miles Per Gallon (MPG) information. EPA has provided fuel economy estimates in City and Highway MPG values for more than 30 years (see how fuel economy has changed).

EPA and the National Highway Traffic Safety Administration (NHTSA) are updating this label to provide consumers with simple, straightforward energy and environmental comparisons across all vehicles types, including electric vehicles (EV), plug-in hybrid electric vehicles (PHEV), and conventional gasoline/diesel vehicles. The agencies are incorporating new information, such as ratings on fuel economy, greenhouse gas emissions, and other air pollutants, onto the label as required by the Energy Independence and Security Act (EISA) of 2007.

The agencies are proposing two different label designs (see right) and are eager to gather public input. Specifically, which design, or design features, would best help you compare the fuel economy, fuel costs, and environmental impacts of different vehicles. Submit a comment on the proposed labels.

Of course, we’ll have to see if this tech is still state-of-the-art three years down the road.

From the link:

An electronic component that offers a new way to squeeze more data into computers and portable gadgets is set to go into production in just a couple of years. Hewlett-Packard announced today that it has entered an agreement with the Korean electronics manufacturer Hynix Semiconductor to make the components, called “memristors,” starting in 2013. Storage devices made of memristors will allow PCs, cellphones, and servers to store more and switch on instantly.

Making memories: This colorized atomic-force microscopy image shows 17 memristors. The circuit elements, shown in green, are formed at the crossroads of metal nanowires.
Credit: StanWilliams, HP Labs

Memristors are nanoscale electronic switches that have a variable resistance, and can retain their resistance even when the power is switched off. This makes them similar to the transistors used to store data in flash memory. But memristors are considerably smaller–as small as three nanometers. In contrast, manufacturers are experimenting with flash memory components that are 20 nanometers in size.
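A rough way to see why that size difference matters (my back-of-the-envelope, assuming storage density scales with the inverse square of the feature size):

```python
# Areal density advantage of a 3 nm memristor cell over a 20 nm flash cell,
# assuming cell area scales with the square of the feature size.
memristor_nm = 3
flash_nm = 20

density_ratio = (flash_nm / memristor_nm) ** 2
print(f"~{density_ratio:.0f}x more cells per unit area")  # roughly 44x
```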

“The goal is to be at least double whatever flash memory is in three years–we know we’ll beat flash in speed, power, and endurance, and we want to beat it in density, too,” says Stanley Williams, a senior fellow at HP who has been developing memristors in his lab for about five years.

By dipping plain cotton cloth in a high-tech broth full of silver nanowires and carbon nanotubes, Stanford researchers have developed a new high-speed, low-cost filter that could easily be implemented to purify water in the developing world.

Instead of physically trapping bacteria as most existing filters do, the new filter lets them flow on through with the water. But by the time the pathogens have passed through, they have also passed on, because the device kills them with an electrical field that runs through the highly conductive “nano-coated” cotton.

In lab tests, over 98 percent of Escherichia coli bacteria that were exposed to 20 volts of electricity in the filter for several seconds were killed. Multiple layers of fabric were used to make the filter 2.5 inches thick.

“This really provides a new water treatment method to kill pathogens,” said Yi Cui, an associate professor of materials science and engineering. “It can easily be used in remote areas where people don’t have access to chemical treatments such as chlorine.”

Cholera, typhoid and hepatitis are among the waterborne diseases that are a continuing problem in the developing world. Cui said the new filter could be used in water purification systems from cities to small villages.

Faster filtering by letting bacteria through

Filters that physically trap bacteria must have pore spaces small enough to keep the pathogens from slipping through, but that restricts the filters’ flow rate.

Since the new filter doesn’t trap bacteria, it can have much larger pores, allowing water to speed through at a more rapid rate.

“Our filter is about 80,000 times faster than filters that trap bacteria,” Cui said. He is the senior author of a paper describing the research that will be published in an upcoming issue of Nano Letters. The paper is available online now.

The larger pore spaces in Cui’s filter also keep it from getting clogged, which is a problem with filters that physically pull bacteria out of the water.

Cui’s research group teamed with that of Sarah Heilshorn, an assistant professor of materials science and engineering, whose group brought its bioengineering expertise to bear on designing the filters.

Silver has long been known to have chemical properties that kill bacteria. “In the days before pasteurization and refrigeration, people would sometimes drop silver dollars into milk bottles to combat bacteria, or even swallow it,” Heilshorn said.

Cui’s group knew from previous projects that carbon nanotubes were good electrical conductors, so the researchers reasoned the two materials in concert would be effective against bacteria. “This approach really takes silver out of the folk remedy realm and into a high-tech setting, where it is much more effective,” Heilshorn said.

Using the commonplace keeps costs down

But the scientists also wanted to design the filters to be as inexpensive as possible. The amount of silver used for the nanowires was so small the cost was negligible, Cui said. Still, they needed a foundation material that was “cheap, widely available and chemically and mechanically robust.” So they went with ordinary woven cotton fabric.

“We got it at Wal-mart,” Cui said.

To turn their discount store cotton into a filter, they dipped it into a solution of carbon nanotubes, let it dry, then dipped it into the silver nanowire solution. They also tried mixing both nanomaterials together and doing a single dunk, which also worked. They let the cotton soak for at least a few minutes, sometimes up to 20, but that was all it took.

The big advantage of the nanomaterials is that their small size makes it easier for them to stick to the cotton, Cui said. The nanowires range from 40 to 100 billionths of a meter in diameter and up to 10 millionths of a meter in length. The nanotubes were only a few millionths of a meter long and as narrow as a single billionth of a meter. Because the nanomaterials stick so well, the nanotubes create a smooth, continuous surface on the cotton fibers. The longer nanowires generally have one end attached with the nanotubes and the other end branching off, poking into the void space between cotton fibers.

“With a continuous structure along the length, you can move the electrons very efficiently and really make the filter very conducting,” he said. “That means the filter requires less voltage.”

Minimal electricity required

The electrical current that helps do the killing is only a few milliamperes strong – barely enough to cause a tingling sensation in a person and easily supplied by a small solar panel or a couple 12-volt car batteries. The electrical current can also be generated from a stationary bicycle or by a hand-cranked device.

The low electricity requirement of the new filter is another advantage over those that physically filter bacteria, which use electric pumps to force water through their tiny pores. Those pumps take a lot of electricity to operate, Cui said.

In some of the lab tests of the nano-filter, the electricity needed to run current through the filter was only a fifth of what a filtration pump would have needed to filter a comparable amount of water.

The pores in the nano-filter are large enough that no pumping is needed – the force of gravity is enough to send the water speeding through.

Although the new filter is designed to let bacteria pass through, an added advantage of using the silver nanowire is that if any bacteria were to linger, the silver would likely kill it. This avoids biofouling, in which bacteria form a film on a filter. Biofouling is a common problem in filters that use small pores to filter out bacteria.

Cui said the electricity passing through the conducting filter may also be altering the pH of the water near the filter surface, which could add to its lethality toward the bacteria.

Cui said the next steps in the research are to try the filter on different types of bacteria and to run tests using several successive filters.

“With one filter, we can kill 98 percent of the bacteria,” Cui said. “For drinking water, you don’t want any live bacteria in the water, so we will have to use multiple filter stages.”

Cui’s research group has gained attention recently for using nanomaterials to build batteries from paper and cloth.

###

David Schoen and Alia Schoen were both graduate students in Materials Science and Engineering when the water-filter research was conducted and are co–lead authors of the paper in Nano Letters. David Schoen is now a postdoctoral researcher at Stanford.

Liangbing Hu, a postdoctoral researcher in Materials Science and Engineering, and Han Sun Kim, a graduate student in Materials Science and Engineering at the time the research was conducted, also contributed to the research and are co-authors of the paper.

Why Americans believe Obama is a Muslim

Published: Aug. 31, 2010

EAST LANSING, Mich. — There’s something beyond plain old ignorance that motivates Americans to believe President Obama is a Muslim, according to a first-of-its-kind study of smear campaigns led by a Michigan State University psychologist.

The research by Spee Kosloff and colleagues suggests people are most likely to accept such falsehoods, both consciously and unconsciously, when subtle clues remind them of ways in which Obama is different from them, whether because of race, social class or other ideological differences.

These judgments, Kosloff argues, are irrational. He also suggests they are fueled by an “irresponsible” media culture that allows political pundits and “talking heads” to perpetuate the lies.

“Careless or biased media outlets are largely responsible for the propagation of these falsehoods, which catch on like wildfire,” said Kosloff, visiting assistant professor of psychology. “And then social differences can motivate acceptance of these lies.”

A Pew Research Center poll in August 2010 found that 18 percent of Americans believe Obama is a Muslim – up from 11 percent in March 2009 – even though he’s a practicing Christian. Kosloff noted that the poll was conducted before Obama’s recent comments supporting the right of Muslims to build a mosque near New York’s Ground Zero.

Kosloff and colleagues launched their study prior to the 2008 U.S. presidential election, as the candidates were being bombarded with smear campaigns. It’s the first comprehensive experimental study of the psychological factors that motivate Americans to believe the lies. The findings are published in the American Psychological Association’s Journal of Experimental Psychology: General.

In four separate experiments (three before the election and one after), the researchers looked at both conscious and unconscious acceptance of political smears by mostly white, non-Muslim college students. For the conscious trials, the participants were shown false blog reports arguing that Obama is a Muslim or a socialist or that John McCain is senile. The unconscious trials involved gauging how rapidly subjects could identify smear-relevant words such as “Muslim” or “turban” after Obama’s name was presented subliminally.

Among the results:

• On average, participants who supported McCain said there is a 56 percent likelihood Obama is a Muslim. But when they were asked to fill out a demographic card asking for their own race, the likelihood jumped to 77 percent. Kosloff said this shows that simply thinking about a social category that differentiated participants from Obama was enough to get them to believe the smear.

• Participants undecided about the candidates said there is a 43 percent chance McCain is senile – a number that increased to 73 percent when they simply listed their own age on a card.

• Undecided participants said there is a 25 percent chance Obama is a socialist – a number that jumped to 62 percent when they considered race. “Even though being a socialist has nothing to do with race,” Kosloff said, “irrationally they tied the two together.”

Kosloff said the increase in belief that Obama is Muslim likely reflects a growing disenchantment with his presidency – a sense that people feel Obama is not on their side.

“When people are unsatisfied with the president – whether it’s the way he’s handling the economy, health care or Afghanistan – our research suggests that this only fuels their readiness to accept untrue rumors,” Kosloff said.

“As his job rating goes down, suggesting that people feel like he’s not ideologically on their side, we see an increase in this irrational belief that he’s a Muslim,” he added. “Unfortunately, in America, many people dislike Muslims so they’ll label Obama as Muslim when they feel different from him.”

The study was done with researchers from the University of Arizona, the University of British Columbia and Leiden University in the Netherlands.

###

Michigan State University has been advancing knowledge and transforming lives through innovative teaching, research and outreach for more than 150 years. MSU is known internationally as a major public university with global reach and extraordinary impact. Its 17 degree-granting colleges attract scholars worldwide who are interested in combining education with practical problem solving.