I’m a Scientist, Get me out of Here! is a bit like an X Factor for scientists. Scientists put up information about themselves and their work on our site. Teenagers in their science classes read about the scientists, ask them questions, and then vote for which scientist they think should get a prize of £500. It’s funded by the Wellcome Trust and the Institute of Physics.

The results are in. Over the last two weeks, 2,785 questions have been asked by 2,296 students and answered by 30 intrepid scientists.

The winners of I’m a Scientist (one from each of the six sub-groups) are:

Argon: Julian Rayner (Wellcome Trust Sanger Institute)

Chlorine: Murray Collins (Institute of Zoology – London School of Economics)

Nerve-cell tendrils readily thread their way through tiny semiconductor tubes, researchers find, forming a crisscrossed network like vines twining toward the sun. The discovery that offshoots from nascent mouse nerve cells explore the specially designed tubes could lead to tricks for studying nervous system diseases or testing the effects of potential drugs. Such a system may even bring researchers closer to brain-computer interfaces that seamlessly integrate artificial limbs or other prosthetic devices.

“This is quite innovative and interesting,” says nanomaterials expert Nicholas Kotov of the University of Michigan in Ann Arbor. “There is a great need for interfaces between electronic and neuronal tissues.”

To lay the groundwork for a nerve-electronic hybrid, graduate student Minrui Yu of the University of Wisconsin–Madison and his colleagues created tubes of layered silicon and germanium, materials that could insulate electric signals sent by a nerve cell. The tubes were various sizes and shapes and big enough for a nerve cell’s extensions to crawl through but too small for the cell’s main body to get inside.

When the team seeded areas outside the tubes with mouse nerve cells, the cells went exploring, sending their threadlike projections into the tubes and even following the curves of helical tunnels, the researchers report in an upcoming ACS Nano.

“They seem to like the tubes,” says biomedical engineer Justin Williams, who led the research. The approach offers a way to create elaborate networks with precise geometries, says Williams. “Neurons left to their own devices will kind of glom on to one another or connect randomly to other cells, neither of which is a good model for how neurons work.”

Image: Minrui Yu

At this stage, the researchers have established that nerve cells are game for exploring the tiny tubes, which seem to be biologically friendly, and that the cell extensions will follow the network to link up physically. But it isn’t clear if the nerves are talking to each other, sending signals the way they do in the body. Future work aims to get voltage sensors and other devices into the tubes so researchers can eavesdrop on the cells. The confining space of the little tunnels should be a good environment for listening in, perhaps allowing researchers to study how nerve cells respond to potential drugs or to compare the behavior of healthy neurons with malfunctioning ones such as those found in people with multiple sclerosis or Parkinson’s.

Eventually, the arrangement may make it easier to couple living cells with technology on a larger scale, but getting there is no small task, says neuroengineer Ravi Bellamkonda of the Georgia Institute of Technology in Atlanta.

“There’s a lot of nontrivial engineering that has to happen, that’s the real challenge,” says Bellamkonda. “It’s really cool engineering, but what it means for neuroscience remains to be seen.”

Scientists have developed a new way to manipulate atoms inside diamond crystals so that they store information long enough to function as quantum memory. Quantum memory encodes information not as the 0s and 1s crunched by conventional computers but in states that are both 0 and 1 at the same time. Physicists use such quantum data to send information securely, and hope to eventually build quantum computers capable of solving problems beyond the reach of today’s technology.

For those developing this quantum memory, the perfect diamonds don’t come from Tiffany & Co. — or Harry Winston, for that matter. Impurities are the key to the technology.

“Oddly enough, perfection may not be the way to go,” said David Awschalom of the University of California, Santa Barbara. “We want to build in defects.”

One of the most common defects in diamond is nitrogen, which turns the stone yellow. When a nitrogen atom sits next to a vacant spot in the carbon crystal, the intruding element provides an extra electron that moves into the hole. Several years ago, scientists learned how to change the spin of such electrons using microwave energy and put them to work as quantum bits, or qubits.

In search of a more stable way to store quantum information, Awschalom has now figured out how to link the spin of an electron to the spin of the nearby nitrogen’s nucleus. This transfer, triggered by magnetic fields, is fast — about 100 nanoseconds, comparable to how long it takes to store information on a stick of RAM.

The technique has “a fidelity of 85 to 95 percent,” Awschalom said March 22 in Dallas at a meeting of the American Physical Society.

In contrast to some other quantum systems under development, which require temperatures close to absolute zero, this diamond memory works at room temperature. The spins inside the diamond can be both changed and measured by shining laser light into the diamond. This could make diamond an attractive material for scientists developing nanophotonic systems designed to move and store information in packets of light.

Unlike a diamond itself, this quantum memory isn’t forever. But it lasts for a very long time by quantum standards. The nuclear spin remains coherent for more than a millisecond, with the potential to improve to seconds.

“You can only do your quantum magic as long as you have coherence,” said Sebastian Loth, a physicist at IBM’s Almaden Research Center in San Jose, Calif. “If you have a lifetime of milliseconds, that lets you do millions of operations.”
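Loth’s “millions of operations” figure follows from simple division of the coherence time by the time one operation takes. A back-of-the-envelope sketch (the gate time is an assumed illustrative figure, not from the article):

```python
# Operations per coherence window = coherence time / time per operation.
# The gate time below is an assumption for illustration; the article gives none.
coherence_time_s = 1e-3   # ~1 millisecond of nuclear-spin coherence
gate_time_s = 1e-9        # assume a ~1 nanosecond quantum gate

operations = coherence_time_s / gate_time_s
print(f"{operations:.0e} operations per coherence window")
```

With these assumed numbers, a millisecond of coherence indeed buys on the order of a million operations.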

In addition to stability, diamond may also overcome another hurdle that has faced quantum computing — it can be scaled up to larger sizes. In a paper published last year in Nano Letters, Awschalom developed a technique for creating customizable patterns of nitrogen atoms inside a diamond, using lasers to implant thousands of atoms in a grid.

Awschalom’s diamond quantum memory could also be useful for building large quantum networks. Currently, quantum information is transmitted by connecting, or entangling, qubits. This scheme is limited to distances of kilometers. Quantum repeaters could potentially use small chips of diamond to catch, store and retransmit this information to extend the range, enabling quantum networks to work over much longer distances.

At the recent CeBIT Fair in Hanover, Germany, IBM CEO Sam Palmisano announced that IBM’s 3D technology will likely appear in its Power8 processor, planned for 2013, using 28 nm or 22 nm process technology.

The first goal, he indicated, is to place the memory directly above or beneath the CPU. The processor will likely employ a linked memory and “a layer of small specialized computing cores adapted for specific intended uses.” Future plans envision up to 100,000 connections per mm² in silicon.

The principal issue with 3D ICs for processor units is cooling. IBM has undertaken 3D stack cooling research with the École Polytechnique Fédérale de Lausanne and ETH Zurich. At the CeBIT fair, IBM scientists presented the first test chips in which cooling water is circulated through 50 µm channels (i.e., microchannel cooling). Bruno Michel, manager of the Advanced Thermal Packaging group at IBM Research Zurich, reported that the energy-efficient hot-water cooling technology is part of their concept of a zero-emission data center. 3D chip stacks, which generate more heat than a single processor in almost the same amount of space, are cooled with water rather than air to reduce energy consumption. IBM reports that it will be a few more years before this technology is ready for production.

While wandering lonely as a cloud, William Wordsworth should have stopped to wonder how those daffodils bloomed.

Not content to just watch flowers dance in the breeze, Harvard physicists have described for the first time how flowers generate the forces needed to curl open come springtime. In the Asiatic lily (Lilium casablanca), this poetic blossoming is driven by skewed growth at the edges of petals, the team reports online Mar. 21 in the Proceedings of the National Academy of Sciences.

Over four and a half days, the Asiatic lily’s young buds slowly suck up water, growing until they’re ready to explode. The petals and sepals — the outer, greener portion of a flower — gradually invert, then peel open like a banana and form a blossom.

When it comes to plant motion, the slow emergence of the lily flower is a far cry from the quick closing of the Venus flytrap, says Jan Skotheim, a biologist at Stanford University. “In the blooming lily, you don’t have an explosive snap,” he says. Skotheim and L. Mahadevan, a Harvard physicist and coauthor of the new study on lilies, discovered the biophysical mechanism underlying the flytrap’s snare in 2005.

But both blooming and snapping work because plants build up “instabilities,” Skotheim says. Instabilities that shape roots, stems and lily blossoms often form when certain cells elongate more than others. Too much growth causes strain, which bends thin tissues the way a hooked fish bends a fishing pole.

In the lily studies, exactly which cells were tugging on which wasn’t clear. The Harvard team’s first clue to the mechanism was that the outer margins of petals and sepals ruffled during blooming, while inner surfaces stayed smooth. Those wavy patterns hinted that cells might be growing faster at the edges, similar to adding slack to a rope. That excess growth could, potentially, coax the petal to go from curving inward inside the bud to curving outward. “Because it’s only growing at the edge and not the middle,” Skotheim says, “you get a mismatch of strain.”
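The “slack rope” idea can be put in numbers: the mismatch strain is simply the fractional extra length the edge grows relative to the midline. A toy calculation with made-up lengths (the team’s actual model is a full elasticity treatment of thin sheets):

```python
# Toy illustration of edge-growth mismatch in a petal. If the edge grows a
# few percent longer than the midline, the strain mismatch forces the thin
# sheet to bend and ruffle. All lengths are invented for illustration.
midline_length_mm = 50.0
edge_length_mm = 52.5   # edge grows 5% more than the midline

mismatch_strain = (edge_length_mm - midline_length_mm) / midline_length_mm
print(f"edge/midline mismatch strain: {mismatch_strain:.1%}")
```

Even a few percent of differential growth is enough, in thin-sheet elasticity, to flip a curved petal from one shape to another.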

The Harvard team surgically removed the edges of lily petals and sepals, and found that the remaining flower parts didn’t curl out with their usual elegance. The researchers also developed a mathematical model to demonstrate how extra edge strain could warp thin materials like flower petals. This strain doesn’t only open petals up but also curls their edges up like a smile. This mechanism may hold for other lilies, suggests study coauthor Haiyi Liang. “But beyond that, say roses, we are not sure,” says Liang, who is now at the University of Science and Technology of China in Hefei.

Mathematical models have been a boon for researchers studying the inner workings of plants, says Wendy Silk, a plant physiologist at the University of California, Davis. Models similar to those employed in this study have shown how grass blades twist to protect themselves from the sun and how kelp fronds develop ruffled edges, too. To explore the processes that trigger blooming or grass twisting, scientists first need to grasp the basic architectural rules, she says.

Quantitative work like this may have applications down the road, Skotheim says. But he thinks it’s exciting just to learn something new and basic about something so familiar. “You want to understand why flowers look the way flowers do,” he says.

In other words, Wordsworth may have been missing out; there’s a lot more to flowers than just their superficial appearance. “Infusing a scientific aesthetic into a thing of beauty only enhances our appreciation of it,” Mahadevan says. “This is what we try to do as scientists.”

At the atomic scale, University of Michigan researchers have for the first time mapped the polarization of a cutting-edge material for memory chips. Credit: Chris Nelson and Xiaoqing Pan

Engineering researchers at the University of Michigan have found a way to improve the performance of ferroelectric materials, which have the potential to make memory devices with more storage capacity than magnetic hard drives and faster write speed and longer lifetimes than flash memory.

In ferroelectric memory the direction of molecules’ electrical polarization serves as a 0 or a 1 bit. An electric field is used to flip the polarization, which is how data is stored.

With his colleagues at U-M and collaborators from Cornell University, Penn State University, and the University of Wisconsin–Madison, Xiaoqing Pan, a professor in the U-M Department of Materials Science and Engineering, has designed a material system that spontaneously forms nano-sized spirals of electric polarization at controllable intervals. These spirals could provide natural budding sites for polarization switching and thus reduce the power needed to flip each bit.

“To change the state of a ferroelectric memory, you have to supply enough electric field to induce a small region to switch the polarization. With our material, such a nucleation process is not necessary,” Pan said. “The nucleation sites are intrinsically there at the material interfaces.”

To make this happen, the engineers layered a ferroelectric material on an insulator whose crystal lattices were closely matched. The polarization causes large electric fields at the ferroelectric surface that are responsible for the spontaneous formation of the budding sites, known as “vortex nanodomains.”

The researchers also mapped the material’s polarization with atomic resolution, which was a key challenge, given the small scale. They used images from a sub-angstrom resolution transmission electron microscope at Lawrence Berkeley National Laboratory. They also developed image processing software to accomplish this.

“This type of mapping has never been done,” Pan said. “Using this technique, we’ve discovered unusual vortex nanodomains in which the electric polarization gradually rotates around the vortices.”

A paper on the research, titled “Spontaneous Vortex Nanodomain Arrays at Ferroelectric Heterointerfaces,” is available online in Nano Letters.

This research is funded by the Department of Energy, the National Science Foundation and the U.S. Army Research Office.

Physicists at the Max Planck Institute of Quantum Optics succeeded in manipulating atoms individually in a lattice of light and in arranging them in arbitrary patterns. These results are an important step towards large scale quantum computing and for the simulation of condensed matter systems.

With the help of a laser beam, the scientists could address single atoms in the lattice of light and change their spin state. In this way they succeeded in having total control over the single atoms and in "writing" arbitrary two-dimensional patterns.

Physicists around the world are searching for the best way to realize a quantum computer. Now scientists of the team around Stefan Kuhr and Immanuel Bloch at the Max Planck Institute of Quantum Optics (Garching/Munich) took a decisive step in this direction. They could address and change the spin of single atoms with laser light and arrange them in arbitrary patterns (Nature 471, p. 319 (2011), DOI: 10.1038/nature09827). In this way, the physicists strung the atoms along a line and could directly observe their tunnelling dynamics in a “racing duel” of the atoms. A register of hundreds of addressable quantum particles could serve for storing and processing of quantum information in a quantum computer.

In the present experiment, the scientists load laser-cooled rubidium atoms into an artificial crystal of light. These so-called optical lattices are generated by superimposing several laser beams. The atoms are kept in the lattice of light similar to marbles in the hollows of an egg carton.

Already a few months ago, the team of Stefan Kuhr and Immanuel Bloch showed that each site of the optical lattice can be filled with exactly one atom. With the help of a microscope, the scientists visualized atom by atom and thereby verified the shell-like structure of this “Mott insulator”. Now the scientists succeeded in individually addressing the atoms in the lattice and in changing their respective energy state. Using the microscope, they focused a laser beam down to a diameter of about 600 nanometers, which is just above the lattice spacing, and directed it at individual atoms with high precision.

The laser beam slightly deforms the electron shell of the addressed atom and thereby changes the energy difference between its two spin states. Atoms with a spin – i.e. an intrinsic angular momentum – behave like little magnetic needles that can align in two opposite directions. If the atoms are irradiated with microwaves that are in resonance with the modified spin transition, only the addressed atoms absorb a microwave photon, which causes their spin to flip. All other atoms in the lattice remain unaffected by the microwave field.

With the addressing scheme, arbitrary patterns of atoms in the lattice can be prepared. The atomic patterns each consist of 10–30 single atoms that are kept in an artificial crystal of light. (High-resolution images available online at www.quantum-munich.de/media)

The scientists demonstrated the high fidelity of this addressing scheme in a series of experiments. For this purpose, the spins of all atoms along a line were flipped one after the other, by moving the addressing laser from lattice site to lattice site. After removing all atoms with a flipped spin from the trap, the addressed atoms are visible as holes, which can easily be counted. In this way, the physicists deduced that the addressing worked in 95% of the cases. Atoms at the neighbouring sites are not influenced by the addressing laser. The method provides the possibility to generate arbitrary distributions of atoms in the lattice (see figures).

Starting from an arrangement of 16 atoms that were strung together on neighbouring lattice sites like a necklace of beads, the scientists studied what happens when the height of the lattice is ramped down so far that the particles are allowed to “tunnel” according to the rules of quantum mechanics. They move from one lattice site to the other, even if their energy is not sufficient to cross the barrier between the lattice wells. “As soon as the height of the lattice has reached the point where tunnelling is possible, the particles start running as if they took part in a horse-race”, doctoral candidate Christof Weitenberg describes. “By taking snapshots of the atoms in the lattice at different times after the “starting signal”, we could directly observe the quantum mechanical tunnelling-effect of single massive particles in an optical lattice for the first time”.
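The tunnelling “race” can be caricatured with the simplest possible quantum model: one particle hopping between two wells. A hedged numpy sketch, with an arbitrary coupling J rather than the experiment’s parameters:

```python
import numpy as np

# Minimal sketch of quantum tunnelling between two lattice wells: a particle
# starting in the left well oscillates into the right well at a rate set by
# the tunnelling coupling J, even though it cannot classically cross the
# barrier. Parameters are illustrative, not the Garching experiment's.
J = 1.0                                   # tunnelling coupling (arbitrary units)
H = np.array([[0.0, -J], [-J, 0.0]])      # two-site tight-binding Hamiltonian

psi0 = np.array([1.0, 0.0], dtype=complex)  # start in the left well
t = np.pi / (2 * J)                         # half a tunnelling oscillation

# Time-evolve psi(t) = exp(-iHt) psi0 via eigendecomposition of H
evals, evecs = np.linalg.eigh(H)
psi_t = evecs @ (np.exp(-1j * evals * t) * (evecs.conj().T @ psi0))

p_right = abs(psi_t[1]) ** 2
print(f"probability in right well: {p_right:.3f}")  # 1.000
```

Snapshots of p_right at successive times are the toy analogue of the team’s images of atoms spreading along the lowered lattice.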

The new addressing technique allows many interesting studies of the dynamics of collective quantum states, as they appear in solid state systems. It also opens new perspectives in quantum information processing. “A Mott insulator with exactly one atom per lattice site acts as a natural quantum register with a few hundred quantum bits, the ideal starting point for scalable quantum information processing,” as Stefan Kuhr explains. “We have shown that we can individually address single atoms. For an atom to serve as a quantum bit, we need to generate coherent superpositions of its two spin states. A further step is to realize elementary logical operations between two selected atoms in the lattice, so-called quantum gates.” [OM/SK]

Systems biology is the classic case of the whole being greater than the sum of its parts: researchers seek to understand how complex organisms emerge from the interactions of the individual elements that make up their constituent cells.

Systems biology is a holistic approach to the study of how a living organism emerges from the interactions of the individual elements that make up its constituent cells. Embracing a broad range of disciplines, this field, which is just beginning to come into public prominence, holds promise for advances in a number of important areas, including safer, more effective pharmaceuticals, improved environmental remediation, and clean, green, sustainable energy. However, the most profound impact of systems biology, according to one of its foremost practitioners, is that it might one day provide an answer to the central question: What is life?

Adam Arkin, director of the Physical Biosciences Division of the U.S. Department of Energy (DOE)’s Lawrence Berkeley National Laboratory and a leading computational biologist, is the corresponding author of an essay in the journal Cell that describes in detail key technologies and insights advancing systems biology research. The paper is titled “Network News: Innovations in 21st Century Systems Biology.” Co-authoring the article is David Schaffer, a chemical engineer with Berkeley Lab’s Physical Biosciences Division. Both Arkin and Schaffer also hold appointments with the University of California (UC) Berkeley.

“Systems biology aims to understand how individual elements of the cell generate behaviors that allow survival in changeable environments, and collective cellular organization into structured communities,” Arkin says. “Ultimately, these cellular networks assemble into larger population networks to form large-scale ecologies and thinking machines, such as humans.”

In their essay, Arkin and Schaffer argue that the ideas behind systems biology originated more than a century ago and that the field should be viewed as “a mature synthesis of thought about the implications of biological structure and its dynamic organization.” Research into the evolution, architecture, and function of cells and cellular networks in combination with ever expanding computational power has led to predictive genome-scale regulatory and metabolic models of organisms. Today systems biology is ready to “bridge the gap between correlative analysis and mechanistic insights” that can transform biology from a descriptive science to an engineering science.

Adam Arkin (Photo by Roy Kaltschmidt, Berkeley Lab Public Affairs)

Discoveries in systems biology, the authors say, can generally be divided between those that relied on a “mechanistic approach to causal relationships,” and those that relied on “large-scale correlation analysis.” The results of these discoveries can also be categorized according to whether they primarily pertained to the principles behind cellular network organization, or to predictions about the behavior of these networks.

“As systems biology matures, the number of studies linking correlation with causation and principles with prediction will continue to grow,” Schaffer says. “Advances in measurement technologies that enable large-scale experiments across an array of parameters and conditions will increasingly meld these correlative and causal approaches, including correlative analyses leading to mechanistic hypothesis testing, as well as causal models empowered with sufficient data to make predictions.”

David Schaffer (Photo by Roy Kaltschmidt, Berkeley Lab Public Affairs)

As the complete genomes of more organisms are sequenced, and measurement and genetic manipulation technologies are improved, scientists will be able to compare systems across a broader expanse of phylogenetic trees. This, Arkin and Schaffer say, will enhance our understanding of mechanistic features that are necessary for function and evolution.

“The increasing integration of experimental and computational technologies will thus corroborate, deepen, and diversify the theories that the earliest systems biologists used logic to infer,” Arkin says. “This will thereby inch us ever closer to answering the What is Life question.”

The systems biology research cited in this essay by Arkin and Schaffer was supported by DOE’s Office of Science (Biological and Environmental Research), and by the National Institutes of Health.

Lawrence Berkeley National Laboratory is a U.S. Department of Energy (DOE) national laboratory managed by the University of California for the DOE Office of Science. Berkeley Lab provides solutions to the world’s most urgent scientific challenges including sustainable energy, climate change, human health, and a better understanding of matter and force in the universe. It is a world leader in improving our lives through team science, advanced computing, and innovative technology. Visit the Lawrence Berkeley National Laboratory website at www.lbl.gov.

Additional Information

For more about Berkeley Lab’s Physical Biosciences Division, visit the Website at http://pbd.lbl.gov/

Northwestern University researchers have developed a new switching device that takes quantum communication to a new level. The device is a practical step toward creating a network that takes advantage of the mysterious and powerful world of quantum mechanics.

The researchers can route quantum bits, or entangled particles of light, at very high speeds along a shared network of fiber-optic cable without losing the entanglement information embedded in the quantum bits. The switch could be used toward achieving two goals of the information technology world: a quantum Internet, where encrypted information would be completely secure, and networking superfast quantum computers.

The device would enable a common transport mechanism, such as the ubiquitous fiber-optic infrastructure, to be shared among many users of quantum information. Such a system could route a quantum bit, such as a photon, to its final destination just as an e-mail is routed across the Internet today.

The research — a demonstration of the first all-optical switch suitable for single-photon quantum communications — is published by the journal Physical Review Letters.

“My goal is to make quantum communication devices very practical,” said Prem Kumar, AT&T Professor of Information Technology in the McCormick School of Engineering and Applied Science and senior author of the paper. “We work in fiber optics so that as quantum communication matures it can easily be integrated into the existing telecommunication infrastructure.”

The bits we all know through standard, or classical, communications only exist in one of two states, either “1” or “0.” All classical information is encoded using these ones and zeros. What makes a quantum bit, or qubit, so attractive is that it can be both one and zero simultaneously, as well as one or zero. Additionally, two or more qubits at different locations can be entangled — a mysterious connection that is not possible with ordinary bits.

Researchers need to build an infrastructure that can transport this “superposition and entanglement” (being one and zero simultaneously) for quantum communications and computing to succeed.
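Superposition and entanglement can be made concrete as state vectors. A minimal numpy sketch, purely illustrative and not a model of Kumar’s photonic hardware:

```python
import numpy as np

# Basis states of a single qubit
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# A qubit in an equal superposition of 0 and 1: measuring it gives either
# outcome with probability 1/2.
plus = (ket0 + ket1) / np.sqrt(2)
print(abs(plus) ** 2)  # ≈ [0.5, 0.5]

# Two entangled qubits (a Bell state): only the correlated outcomes 00 and
# 11 ever occur, no matter how far apart the qubits are.
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
probs = bell ** 2
print(probs)  # ≈ [0.5, 0, 0, 0.5] for outcomes 00, 01, 10, 11
```

The switch’s job, in these terms, is to redirect a photon without collapsing or distorting state vectors like these.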

The qubit Kumar works with is the photon, a particle of light. A photonic quantum network will require switches that don’t disturb the physical characteristics (superposition and entanglement properties) of the photons being transmitted, Kumar says. He and his team built an all-optical, fiber-based switch that does just that while operating at very high speeds.

To demonstrate their switch, the researchers first produced pairs of entangled photons using another device developed by Kumar, called an Entangled Photon Source. “Entangled” means that some physical characteristic (such as polarization as used in 3-D TV) of each pair of photons emitted by this device are inextricably linked. If one photon assumes one state, its mate assumes a corresponding state; this holds even if the two photons are hundreds of kilometers apart.

The researchers used pairs of polarization-entangled photons emitted into standard telecom-grade fiber. One photon of the pair was transmitted through the all-optical switch. Using single-photon detectors, the researchers found that the quantum state of the pair of photons was not disturbed; the encoded entanglement information was intact.

“Quantum communication can achieve things that are not possible with classical communication,” said Kumar, director of Northwestern’s Center for Photonic Communication and Computing. “This switch opens new doors for many applications, including distributed quantum processing where nodes of small-scale quantum processors are connected via quantum communication links.”

The National Science Foundation through their Integrative Graduate Education and Research Traineeship (IGERT) program supported the research.

Scientists have developed a programmable “molecular robot” — a sub-microscopic molecular machine made of synthetic DNA that moves between track locations separated by 6 nm. The robot, a short strand of DNA, follows instructions programmed into a set of fuel molecules that determine its destination, for example whether to turn left or right at a junction in the track. The report, which represents a step toward futuristic nanomachines and nanofactories, appears in ACS’s Nano Letters.

Andrew Turberfield and colleagues point out that other scientists have developed similar DNA-based robots, which move autonomously. Some of these use a biped design and move by alternately attaching and detaching themselves from anchor points along the DNA track, foot over foot, when fuel is added. Scientists would like to program DNA robots to autonomously walk in different directions to move in a programmable pattern, a key to harnessing their potential as cargo-carrying molecular machines.

The scientists describe an advance toward this goal — a robot that can be programmed to choose among different branches of a molecular track, rather than just move in a straight line. The key to this specialized movement is a so-called “fuel hairpin,” a molecule that serves both as a chemical energy source for propelling the robot along the track and as a routing instruction. The instructions tell the robot which point it should move to next, allowing it to select between the left and right branches of a junction in the track and precisely controlling the robot’s route — which could potentially allow the transport of pharmaceuticals or other materials.
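Stripped of its chemistry, the routing logic is a branching state machine: each fuel instruction both powers a step and picks a branch. A toy sketch with hypothetical track labels (the real system is DNA strand displacement, not a lookup table):

```python
# Toy model of programmable routing on a branched track. Site names are
# invented; each "fuel" instruction selects the left or right branch.
track = {
    "start": {"left": "A", "right": "B"},
    "A": {"left": "A1", "right": "A2"},
    "B": {"left": "B1", "right": "B2"},
}

def run(program, position="start"):
    """Follow a sequence of left/right instructions through the track."""
    for instruction in program:
        position = track[position][instruction]
    return position

print(run(["left", "right"]))  # A2
```

Different fuel programs steer the same robot on the same track to different destinations, which is exactly the capability the paper demonstrates at the molecular scale.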

The authors acknowledged funding from the Engineering and Physical Sciences Research Council (EPSRC).