Scientists have been theorising about its existence since 2013,
but now it’s finally here in physical form: stanene, an intriguing new
material made up of a one-atom-thick mesh of tin. Like its
much-hyped cousin graphene, which is made of one-atom-thick layers of
carbon, stanene is predicted by computer models to conduct electricity
without heat loss.

In fact, theories predict that stanene could be
the most efficient material ever made when it comes to conducting
electricity. Being two-dimensional, the material allows electrons to
zoom along the edges of the mesh in a single lane, bypassing the
energy-sapping collisions that occur in three-dimensional materials and
achieving 100 percent efficiency. And theoretically, it should work at
room temperature.

"It's
surprising that it can work at such a high temperature," one of the
team, physicist Shou-Cheng Zhang from Stanford University in the US, told Charles Q. Choi at Scientific American back in 2013.
"Scientists have looked for dissipationless transport of electricity
for many years, but usually the systems we find only work under extreme
conditions, either very low temperature or strong magnetic fields."

Because stanene could potentially allow electrons to travel
uninterrupted (it's collisions that cause the vibrations which generate
heat), wire made from this new material could carry electricity across
great distances for long periods of time without energy loss. Imagine
your smartphone, laptop, and chargers working for hours without ever
getting hot.

If stanene lives up to the predictions that it can work at
room temperature with 100 percent efficiency, it will be the perfect
example of a topological insulator, overtaking graphene as the best new material to build the electronics of the future.

But
that’s a big "if". So far, the team that produced the material has not
been able to confirm that stanene has any of these properties at all,
and the problem lies in the way that Zhang and his colleagues
constructed it.

"The researchers vaporised a bit of tin inside of
a vacuum chamber, allowing it to form its characteristic mesh on a
bismuth telluride surface," a university press release explains.
"The team was able to see only the top ridges of the structure with a
scanning tunneling microscope, however, and believe the substrate
interacted with the mesh, preventing conductivity testing."

Physicist Ralph Claessen from the University of Würzburg in Germany, who was not involved in the study, told Chris Cesare at Nature that
he’s not even sure that what Zhang and his colleagues produced is
actually stanene, because the structure of the tiny sample can’t
be seen in its entirety.

Back in 2013, theories predicted
that stanene’s two-dimensional tin lattice would form what’s known as a
buckled honeycomb structure, which must include "alternate atoms folding
upwards to form corrugated ridges", says Cesare.
Right now, Zhang and his team can only see the upper ridges of these
alternating atoms, which means they cannot confirm that it is a true
buckled honeycomb structure. But they told Nature that the distance between the ridges they can see corresponds to what a buckled honeycomb structure should look like.

There are a couple of options going forward: the team can work with the
tiny sample they have now and try to confirm its structure using X-ray
diffraction. Or they can work on producing a larger sample so the
arrangement can be more easily seen. Then they will have to figure out
which materials can be used instead of bismuth telluride so they can
actually perform conductivity testing. The team has summarised the
findings in Nature Materials.

We’re not there yet, but it’s a start, and that’s exciting enough in itself, says graphene expert, Guy Le Lay, from Aix-Marseille University in France: "It's like going to the Moon. The first step is the crucial step."

Friday, August 7, 2015

Variation of tree ring width translated into summer temperature anomalies for the past 7,000 years, based on samples from Holocene deposits on the Yamal Peninsula and on living Siberian conifers.[1]

Dendroclimatology is the science of determining past climates from trees (primarily properties of the annual tree rings). Tree rings are wider when conditions favor growth, narrower when times are difficult. Other properties of the annual rings, such as maximum latewood density (MXD) have been shown to be better proxies than simple ring width. Using tree rings, scientists have estimated many local climates for hundreds to thousands of years previous. By combining multiple tree-ring studies (sometimes with other climate proxy records), scientists have estimated past regional and global climates (see Temperature record of the past 1000 years).

Advantages

Tree rings are especially useful as climate proxies in that they can be well-dated (via matching of the rings from sample to sample, i.e. dendrochronology). This allows extension backwards in time using deceased tree samples, even using samples from buildings or from archeological digs. Another advantage of tree rings is that they are clearly demarcated in annual increments, as opposed to other proxy methods such as boreholes. Furthermore, tree rings respond to multiple climatic effects (temperature, moisture, cloudiness), so that various aspects of climate (not just temperature) can be studied. However, this can be a double-edged sword, as discussed under Climate factors.

Limitations

Along with the advantages of dendroclimatology come some limitations: confounding factors, geographic coverage, annual resolution, and collection difficulties. The field has developed various methods to partially adjust for these challenges.

Confounding factors

There are multiple climate and non-climate factors as well as nonlinear effects that impact tree ring width. Methods to isolate single factors (of interest) include botanical studies to calibrate growth influences and sampling of "limiting stands" (those expected to respond mostly to the variable of interest).

Climate factors

Climate factors that affect trees include temperature, precipitation, sunlight, and wind. To differentiate among these factors, scientists collect information from "limiting stands". An example of a limiting stand is the upper elevation treeline: here, trees are expected to be more affected by temperature variation (which is "limited") than precipitation variation (which is in excess). Conversely, lower elevation treelines are expected to be more affected by precipitation changes than temperature variation. This is not a perfect work-around as multiple factors still impact trees even at the "limiting stand", but it helps. In theory, collection of samples from nearby limiting stands of different types (e.g. upper and lower treelines on the same mountain) should allow mathematical solution for multiple climate factors. However, this method is rarely used.
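
The rarely used multi-stand idea mentioned above can be sketched numerically. This is a minimal illustration with invented sensitivity coefficients, not real calibration data: if ring-width responses at two limiting stands are assumed linear in temperature and precipitation, a year's two observed widths determine both climate anomalies via a small linear system.

```python
import numpy as np

# Hypothetical "limiting stands" illustration. An upper-treeline stand is
# assumed to respond mostly to temperature, a lower-treeline stand mostly
# to precipitation. All coefficients below are invented for the sketch.

# Assumed (calibrated) sensitivities: rows = stands, cols = (temp, precip)
A = np.array([[0.8, 0.1],   # upper treeline: temperature-limited
              [0.2, 0.7]])  # lower treeline: precipitation-limited

# One year's observed ring-width anomalies at the two stands
widths = np.array([0.5, 0.9])

# Solve A @ climate = widths for that year's (temperature, precipitation)
temp_anom, precip_anom = np.linalg.solve(A, widths)
print(round(temp_anom, 3), round(precip_anom, 3))
```

In practice the sensitivities themselves must be estimated (with error), and the responses are not perfectly linear, which is one reason the method is rarely applied.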

Non-climate factors

Non-climate factors include soil, tree age, fire, tree-to-tree competition, genetic differences, logging or other human disturbance, herbivore impact (particularly sheep grazing), pest outbreaks, disease, and CO2 concentration. For factors which vary randomly over space (tree to tree or stand to stand), the best solution is to collect sufficient data (more samples) to compensate for confounding noise. Tree age is corrected for with various statistical methods: either fitting spline curves to the overall tree record or using similar aged trees for comparison over different periods (regional curve standardization). Careful examination and site selection helps to limit some confounding effects, for example picking sites undisturbed by modern man.
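
The age-correction step described above can be sketched with synthetic data. This is a simplified stand-in: a low-order polynomial plays the role of the smooth curve (real analyses use splines, negative-exponential curves, or regional curve standardization), and the "tree" is simulated rather than measured.

```python
import numpy as np

# Age detrending sketch: ring widths typically show a biological growth
# trend (wide when young, narrowing with age). Fit a smooth curve to that
# trend and divide it out, leaving a dimensionless ring-width index whose
# year-to-year variation reflects climate. Data below are synthetic.

rng = np.random.default_rng(0)
age = np.arange(1, 201)                    # tree age in years
trend = 2.0 * np.exp(-age / 60.0) + 0.3    # idealized growth decline (mm)
widths = trend * (1 + 0.1 * rng.standard_normal(age.size))

# Fit a smooth curve to the raw widths and divide it out
coeffs = np.polyfit(age, widths, deg=4)
fitted = np.polyval(coeffs, age)
index = widths / fitted                    # ring-width index, mean near 1

print(round(index.mean(), 2))
```

The resulting index has mean near 1 by construction; departures from 1 in a given year are what get compared against climate records.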

Non-linear effects

In general, climatologists assume a linear dependence of ring width on the variable of interest (e.g. moisture). However, if the variable changes enough, the response may level off or even reverse: the home gardener knows that one can underwater or overwater a house plant. In addition, it is possible that interaction effects occur (for example, "temperature times precipitation" may affect growth as well as temperature and precipitation on their own). Here, also, the "limiting stand" helps somewhat to isolate the variable of interest. For instance, at the upper treeline, where the tree is "cold limited", it's unlikely that nonlinear effects of high temperature (an "inverted quadratic") will have a numerically significant impact on ring width over the course of a growing season.
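
A tiny numerical sketch of the "inverted quadratic" point, with an invented response function: if growth peaks at some optimum temperature, a single ring width can correspond to two very different temperatures, which is exactly why sampling at cold-limited stands (safely below the optimum) matters.

```python
# Hypothetical inverted-quadratic growth response (all numbers invented).
# Growth peaks at an optimum temperature and falls off on either side.

def ring_width(temp, optimum=15.0, width_at_opt=2.0):
    """Ring width (arbitrary units) as an inverted parabola in temperature."""
    return width_at_opt - 0.02 * (temp - optimum) ** 2

cold = ring_width(10.0)   # 5 degrees below the optimum
warm = ring_width(20.0)   # 5 degrees above the optimum
print(cold == warm)       # same width from two very different temperatures
```

Under a linear model this ambiguity cannot occur; it is purely a consequence of the curvature.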

Botanical inferences to correct for confounding factors

Botanical studies can help to estimate the impact of confounding variables and in some cases guide corrections for them. These experiments may be ones where growth variables are all controlled (e.g. in a greenhouse), partially controlled (e.g. Free-Air CO2 Enrichment, or FACE, experiments), or where conditions in nature are simply monitored. In any case, the important thing is that multiple growth factors are carefully recorded to determine what impacts growth. With this information, ring-width response can be more accurately understood, and inferences from historic (unmonitored) tree rings become more certain. In concept, this is like the limiting-stand principle, but it is more quantitative, like a calibration.

Divergence problem

The divergence problem is the disagreement between the temperatures measured by the thermometers (instrumental temperatures) on one side, and the temperatures reconstructed from the latewood density or width of tree rings on the other side, at many treeline sites in northern forests.

While the thermometer records indicate a substantial warming trend, tree rings from these particular sites do not display a corresponding change in their maximum latewood density or, in some cases, their width. This does not apply to all such studies.[2] Where this applies, a temperature trend extracted from tree rings alone would not show any substantial warming. The temperature graphs calculated from instrumental temperatures and from these tree ring proxies thus "diverge" from one another since the 1950s, which is the origin of the term. This divergence raises obvious questions of whether other, unrecognized divergences have occurred in the past, prior to the era of thermometers. [3] There is evidence suggesting that the divergence is caused by human activities, and so confined to the recent past, but use of affected proxies can lead to overestimation of past temperatures, understating the current warming trend. There is continuing research into explanations and ways to avoid this problem with tree ring proxies.[2]

Geographic coverage

Trees do not cover the Earth. Polar and marine climates cannot be estimated from tree rings. In perhumid tropical regions, Australia and southern Africa, trees generally grow all year round and don't show clear annual rings. In some forest areas, the tree growth is too much influenced by multiple factors (no "limiting stand") to allow clear climate reconstruction[examples needed]. The coverage difficulty is dealt with by acknowledging it and by using other proxies (e.g. ice cores, corals) in difficult areas. In some cases it can be shown that the parameter of interest (temperature, precipitation, etc.) varies similarly from area to area, for example by looking at patterns in the instrumental record. Then one is justified in extending the dendroclimatology inferences to areas where no suitable tree ring samples are obtainable.

Annual resolution

Tree rings show the impact on growth over an entire growing season. Climate changes deep in the dormant season (winter) will not be recorded. In addition, different times of the growing season may be more important than others (i.e. May versus September) for ring width. However, in general the ring width is used to infer the overall climate change during the corresponding year (an approximation). Another problem is "memory" or autocorrelation. A stressed tree may take a year or two to recover from a hard season. This problem can be dealt with by more complex modeling (a "lag" term in the regression) or by reducing the skill estimates of chronologies.
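
The lag-term fix mentioned above can be sketched as a regression on synthetic data (all coefficients invented): simulate ring widths that carry one year of "memory" on top of a climate signal, then recover both the memory coefficient and the climate coefficient by including last year's width as a predictor.

```python
import numpy as np

# "Memory" sketch: this year's width depends on this year's climate AND on
# last year's width (a stressed tree recovers slowly). A regression with a
# lagged-width term recovers both effects. Coefficients are synthetic.

rng = np.random.default_rng(1)
n = 500
climate = rng.standard_normal(n)           # standardized climate anomaly
width = np.zeros(n)
for t in range(1, n):
    width[t] = (0.5 * width[t - 1]         # memory of last year
                + 0.8 * climate[t]         # this year's climate signal
                + 0.1 * rng.standard_normal())

# Design matrix: [lagged width, climate]; ordinary least squares
X = np.column_stack([width[:-1], climate[1:]])
y = width[1:]
(lag_coef, clim_coef), *_ = np.linalg.lstsq(X, y, rcond=None)
print(round(lag_coef, 2), round(clim_coef, 2))
```

Omitting the lag column would push part of the memory effect into the climate coefficient, which is the bias the lag term is there to prevent.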

Collection difficulties

Tree rings must be obtained from nature, frequently from remote regions. This means that special efforts are needed to map sites properly, and samples must be collected in difficult conditions (often on sloping terrain). Generally, tree rings are collected using a hand-held borer, which requires skill to get a good sample. The best samples come from felling a tree and sectioning it; however, this is more dangerous and damages the forest, and it may not be allowed in certain areas, particularly with the oldest trees in undisturbed sites (which are the most interesting scientifically). As with all experimentalists, dendroclimatologists must, at times, decide to make the best of imperfect data rather than resample. This tradeoff is made more difficult because sample collection (in the field) and analysis (in the lab) may be separated significantly in time and space. These collection challenges mean that data gathering is not as simple or cheap as conventional laboratory science. However, they also give the field's practitioners much enjoyment, working out of doors with hands on trees and tools.

Other measurements

Initial work focused on measuring tree ring width, which is simple to measure and can be related to climate parameters. But the annual growth of the tree leaves other traces. In particular, maximum latewood density (MXD) is another metric used for estimating environmental variables.[4] It is, however, harder to measure. Other properties (e.g. isotope or chemical trace analysis) have also been tried, most notably by L. M. Libby in her 1974 paper "Temperature Dependence of Isotope Ratios in Tree Rings".[5] In theory, multiple measurements on the same ring will allow differentiation of confounding factors (e.g. precipitation and temperature). However, most studies are still based on ring widths at limiting stands.

Measuring radiocarbon concentrations in tree rings has proven to be useful in recreating past sunspot activity, with data now extending back over 11,000 years.[6]

A field of geysers called El Tatio located in northern Chile's Andes Mountains. Credit: Gerald Prins

How did life on Earth begin? It's been one of modern biology's greatest
mysteries: How did the chemical soup that existed on the early Earth
lead to the complex molecules needed to create living, breathing
organisms? Now, researchers say they've found the missing link.

Between 4.6 billion and 4.0 billion years ago, there was probably no
life on Earth. The planet's surface was at first molten and even as it
cooled, it was getting pulverized by asteroids and comets. All that
existed were simple chemicals. But about 3.8 billion years ago, the
bombardment stopped, and life arose. Most scientists think the "last universal common ancestor" — the creature from which everything on the planet descends — appeared about 3.6 billion years ago.

But exactly how that creature arose has long puzzled scientists. For
instance, how did the chemistry of simple carbon-based molecules lead to
the information storage of ribonucleic acid, or RNA?
The RNA molecule must store information to code for proteins. (Proteins
in biology do more than build muscle — they also regulate a host of
processes in the body.)

The new research — which involves two studies, one led by Charles
Carter and one led by Richard Wolfenden, both of the University of North
Carolina — suggests a way for RNA, working with simple amino acids, to
control the production of proteins without the more complex enzymes
that exist today. [7 Theories on the Origin of Life on Earth]

Missing RNA link

This link would bridge this gap in knowledge between the primordial
chemical soup and the complex molecules needed to build life. Current
theories say life on Earth started in an "RNA world,"
in which the RNA molecule guided the formation of life, only later
taking a backseat to DNA, which could more efficiently achieve the same
end result. Like DNA, RNA is a helix-shaped molecule that can store or
pass on information. (DNA is a double-stranded helix, whereas RNA is
single-stranded.) Many scientists think the first RNA molecules existed
in a primordial chemical soup — probably pools of water on the surface of Earth billions of years ago. [Photo Timeline: How the Earth Formed]

The idea was that the very first RNA molecules formed from collections
of three chemicals: a sugar (called a ribose); a phosphate group, which
is a phosphorus atom connected to oxygen atoms; and a base, which is a
ring-shaped molecule of carbon, nitrogen, oxygen and hydrogen atoms. Together,
these three components form a nucleotide, the repeating unit of the RNA chain.

The question: How did the nucleotides come together within the soupy
chemicals to make RNA? John Sutherland, a chemist at the University of
Cambridge in England, published a study in May in the journal Nature Chemistry that showed that a cyanide-based chemistry could make two of the four nucleotides in RNA and many amino acids.

That still left questions, though. There wasn't a good mechanism for putting nucleotides
together to make RNA. Nor did there seem to be a natural way for amino
acids to string together and form proteins. Today, linking amino acids
into proteins is powered by adenosine triphosphate (ATP) and driven by
enzymes called aminoacyl-tRNA synthetases. But there's no
reason to assume there were any such chemicals around billions of years
ago.

Also, proteins have to be shaped a certain way in order to function
properly. That means RNA has to be able to guide their formation — it
has to "code" for them, like a computer running a program to do a task.

Carter noted that it wasn't until the past decade or two that
scientists were able to duplicate the chemistry that makes RNA build
proteins in the lab. "Basically, the only way to get RNA was to evolve
humans first," he said. "It doesn't do it on its own."

Perfect sizes

In one of the new studies, Carter looked at the way a molecule called
"transfer RNA," or tRNA, reacts with different amino acids.

Carter found that one end of the tRNA could help sort amino acids
according to their shape and size, while the other end could link up
with amino acids of a certain polarity. In that way, this tRNA molecule
could dictate how amino acids come together to make proteins, as well as
determine the final protein shape. That's similar to what the
synthetase enzymes do today, activating the process that strings together amino
acids to form proteins.

Carter told Live Science that the ability to discriminate according to
size and shape makes a kind of "code" for proteins called peptides,
which help to preserve the helix shape of RNA.

"It's an intermediate step in the development of genetic coding," he said.

In the other study, Wolfenden and colleagues tested the way proteins
fold in response to temperature, since life somehow arose from a proverbial boiling pot of chemicals on early Earth.
They looked at life's building blocks, amino acids, and how they
distribute in water and oil — a quality called hydrophobicity. They
found that the amino acids' relationships were consistent even at high
temperatures — the shape, size and polarity of the amino acids are what
mattered when they strung together to form proteins, which have
particular structures.

"What we're asking here is, 'Would the rules of folding have been
different?'" Wolfenden said. At higher temperatures, some chemical
relationships change because there is more thermal energy. But that
wasn't the case here.

By showing that it's possible for tRNA to discriminate between
molecules, and that the links can work without "help," Carter thinks
he's found a way for the information storage of chemical structures like
tRNA to have arisen — a crucial piece of passing on genetic traits.
Combined with the work on amino acids and temperature, it offers insight into how early life might have evolved.

This work still doesn't answer the ultimate question of how life began,
but it does show a mechanism for the appearance of the genetic codes
that pass on inherited traits, which got evolution rolling.

Jesse Emspak

Jesse Emspak is a contributing writer for Live Science, Space.com
and Tom's Guide. He focuses on physics, human health and general science.
Jesse has a Master of Arts from the University of California, Berkeley
School of Journalism, and a Bachelor of Arts from the University of
Rochester. Jesse spent years covering finance and cut his teeth at local
newspapers, working local politics and police beats. Jesse likes to
stay active and holds a third degree black belt in Karate, which just
means he now knows how much he has to learn.

Thursday, August 6, 2015

Film footage taken in Hiroshima in March 1946 showing victims with severe burns

In the spring of 1948, the Atomic Bomb Casualty Commission (ABCC) was established in accordance with a presidential directive from Truman to the National Academy of Sciences – National Research Council to conduct investigations of the late effects of radiation among the survivors in Hiroshima and Nagasaki.[236] One of the early studies conducted by the ABCC was on the outcome of pregnancies occurring in Hiroshima and Nagasaki, and in a control city, Kure, located 18 mi (29 km) south of Hiroshima, in order to discern the conditions and outcomes related to radiation exposure.[237] Dr. James V. Neel led the study which found that the number of birth defects was not significantly higher among the children of survivors who were pregnant at the time of the bombings.[238]
The National Academy of Sciences questioned Neel's procedure which did
not filter the Kure population for possible radiation exposure.[239] Among the observed birth defects there was a higher incidence of brain malformation in Nagasaki and Hiroshima, including microencephaly and anencephaly, about 2.75 times the rate seen in Kure.[240][241]

In 1985, Johns Hopkins University human geneticist James F. Crow examined Neel's research and confirmed that the number of birth defects was not significantly higher in Hiroshima and Nagasaki.[242] Many members of the ABCC and its successor Radiation Effects Research Foundation
(RERF) were still looking for possible birth defects or other causes
among the survivors decades later, but found no evidence that they were
common among the survivors.[243][244] Despite Neel's finding of no statistically significant increase in birth defects, historian Ronald E. Powaski wrote that Hiroshima experienced "an increase in stillbirths, birth defects, and infant mortality" following the atomic bomb.[245]
Neel also studied the longevity of the children who survived the
bombings of Hiroshima and Nagasaki, reporting that between 90 and 95
percent were still living 50 years later.[243]

Around 1,900 cancer deaths can be attributed to the after-effects of the bombs. An epidemiology study by the RERF states that from 1950 to 2000, 46% of leukemia deaths and 11% of solid cancer deaths among the bomb survivors were due to radiation from the bombs, the statistical excess being estimated at 200 leukemia and 1,700 solid cancers.[246]
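
As a back-of-the-envelope check, inferred only from the percentages and excess counts quoted above (not from independent data), the RERF figures imply the total numbers of cancer deaths observed among survivors over 1950-2000:

```python
# If 200 excess leukemia deaths are 46% of all leukemia deaths among the
# survivors, and 1,700 excess solid-cancer deaths are 11% of all
# solid-cancer deaths, the implied totals follow by simple division.
# These totals are derived from the quoted figures, not reported directly.

excess_leukemia, frac_leukemia = 200, 0.46
excess_solid, frac_solid = 1700, 0.11

total_leukemia = excess_leukemia / frac_leukemia   # ~435 leukemia deaths
total_solid = excess_solid / frac_solid            # ~15,500 solid-cancer deaths
print(round(total_leukemia), round(total_solid))
```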

About Me

My formal training is in chemistry. I also read a great deal of physics and biology. In fact I very much enjoy reading in general, mostly science, but also some fiction and history. I also enjoy computer programming and writing. I like hiking and exploring nature. I also enjoy people; not too much in social settings, but one on one; also, people with interesting or "off-beat" minds draw me to them. I also have some interest in Buddhism.

These days I get a lot more information from the internet, primarily through Wiki, along with some television, e.g., documentaries and PBS shows like "Nova" and "Nature".

My favorite science writers are Jacob Bronowski ("The Ascent of Man") and Richard Dawkins (his "The Blind Watchmaker" is right up there with Ascent). I also have a favorite writer on Buddhism, Pema Chodron. Favorite films are "Annie Hall" (by Woody Allen), "The Maltese Falcon", "One Flew Over The Cuckoo's Nest", "As Good As It Gets", "Conspiracy Theory", Monty Python's "Holy Grail" and "Life of Brian", and a few others which I can't think of at the moment.

I love a number of classical works (Beethoven's "Pastoral", Debussy's "Afternoon of a Faun" and "Clair de Lune", and Pachelbel's "Canon" come to mind). My favorite piece is probably Gershwin's "Rhapsody in Blue". But I also enjoy a great deal in modern music, including many jazz pieces, folk songs by people like Dylan and Simon and Garfunkel, a hodgepodge of pieces by Crosby, Stills, and Nash and Neil Young, and practically everything the Beatles wrote.

My life over the last few years has been in some disarray, but I am finally "getting it together." As I am very much into the sciences and writing, I would like to move more in this direction. I also enjoy teaching. As for my political leanings, most people would probably describe me as basically liberal, though not extremely so. My religious leanings are absolutely none: I've alluded to my interest in Buddhism, but again my interest is not in any supernatural or scientifically untested aspect of it, but in the way it provides a powerful philosophy and a set of practical, day-to-day methods of dealing with myself and other human beings.