"This could be our black hole's biggest meal in hundreds of years," said Leo
Meyer, of the University of California, Los Angeles. "It might bring spectacular fireworks - and we want everybody to
watch."

The collision could give astronomers a unique window on one of the universe's
great enigmas. Black holes are so dense that not even light can escape once it
passes their event horizon - the point of no return.

The heart of our Milky Way galaxy, seen in a seven-year-long
X-ray exposure by Swift. The black hole is hidden in the centre

They can only be observed indirectly - from brief flashes of radiation
released by matter falling in.

The giant gas cloud G2 is three times the mass of Earth. It was first spotted
in 2011 hurtling towards Sagittarius A* - the black hole in our galactic
core. Impact is now just a few months away. If the gas drifts close enough it will
heat up, releasing great flares of X-rays, which shed light on the black hole's
properties.

Astronomers have already secured front row seats. Dr Meyer's team is tracking
the cloud's approach using the Keck Observatory in Hawaii.

They can see it "stretching like spaghetti" as the black hole tugs at its
head, now moving much faster than its tail.

The gas cloud is being 'stretched like spaghetti' by
the black hole's gravity

And while Keck watches the cloud, Swift is watching the black hole. Nasa's
X-ray space telescope is poised and primed to catch the first glimmers of an
encounter.

"Everyone wants to see the event happening because it's so rare," said
Nathalie Degenaar, Swift's principal investigator.

Sagittarius A* lurks 26,000 light years away in the Milky Way's innermost
region. Viewed from Earth, it is in the southern summer sky near the constellations
Sagittarius and Scorpius.

Even for a black hole it is dim - about a billion times
fainter than others of its "supermassive" type.

And this makes it something of a mystery.

"Currently it's not easy to see at all. But if the gas cloud suddenly feeds
it with much more mass, you might get fireworks. And with that, you can test all
sorts of theories," said Dr Meyer.

Black holes are thought to play a crucial role in the life cycles of
galaxies. They eat matter from their surroundings and blow matter back. This influences
how stars are formed, how the galaxy grows, and how it interacts with other
galaxies.

To get a sense of the typical feeding habits of Sagittarius A*, the Swift
team has been making regular observations since 2006. Every few days, their spacecraft turns toward the galaxy's core and takes a
17-minute-long exposure.

To date, they have detected six strong flares in which the black hole was 150
times brighter for a couple of hours. But these are mere flickers compared to
the jets that could erupt from G2 - a display that could last for years. Exactly
how dramatic it turns out to be depends on what's inside the cloud.

If it is mostly hydrogen gas, the X-rays will glow for years to come as the
black hole slowly swallows it.

The cloud was spotted in 2011 hurtling towards the
galactic core

But there is another possibility - the cloud could be hiding an old star. In
which case, the big dinner date could be an anticlimax.

The black hole may slurp a little from the cloud while the star, dense enough to
resist its gravity, slips by at a safe distance.

"I would be delighted if Sagittarius A* suddenly became 10,000 times
brighter. However it's possible it will not react much - like a horse that won't
drink when led to water," said Jon Miller, of the University of Michigan.

"Will there be fireworks or not? We have to wait and see. There is no smoking
gun that can tell us yet," said Dr Meyer.

"But even if the odds are against it, you still have to look, because if you
do see something it could be spectacular."

Thursday, January 9, 2014

Global Climate Change: An Excellent Source of NASA Data

Welcome to the Global Climate Change data visualization tool on MY NASA DATA. This page links you to a powerful data viewer that will allow you to examine all of the key climate change indicators that have been identified on the Global Climate Change website. You will be able to view these indicators on the Live Access Server (LAS) that we’ve configured for you to view global and local data pertaining to these key areas of interest.
For all of the parameters below, after clicking on their respective links, please click on the “Choose Dataset” button on the upper left-hand side of the LAS page, then click on the cross directly to the left of the indicator that you’d like to view; in some browsers, the Choose Dataset dialogue box will appear automatically. We’ve provided a description of each parameter that is currently available. If you have any questions or issues with the LAS, please email the MY NASA DATA support team.

Europe was inhabited by hunter-gatherers before migrations from the Middle East brought agriculture to the continent.

Newly released genome sequences from almost a dozen early human inhabitants of Europe suggest that the continent was once a melting pot in which brown-eyed farmers encountered blue-eyed hunter-gatherers.

Present-day Europeans, the latest work shows, trace their ancestry to three groups in various combinations: hunter-gatherers, some of them blue-eyed, who arrived from Africa more than 40,000 years ago; Middle Eastern farmers who migrated west much more recently; and a novel, more mysterious population whose range probably spanned northern Europe and Siberia.
That conclusion comes from the genomes of 8,000-year-old hunter-gatherers — one man from Luxembourg and seven individuals from Sweden — as well as the genome of a 7,500-year-old woman from Germany. The analysis, led by Johannes Krause of the University of Tübingen, Germany, and David Reich of Harvard Medical School in Boston, Massachusetts, was posted on the biology preprint website bioRxiv.org on 23 December 2013. The results have not yet been published in a peer-reviewed journal.

A second team, led by Carles Lalueza-Fox at Pompeu Fabra University in Barcelona, Spain, will soon publish the genome of a 7,000-year-old hunter-gatherer from northwest Spain, the palaeogenomicist said at a recent talk. In 2012, his team released preliminary genomic data from the same sample, suggesting that this hunter-gatherer bore little relationship to modern Spaniards. The two papers describe what are thought to be the oldest human genomes from Europe found so far.

Milk or grain

The new studies sketch a portrait of early Europeans based on variations in DNA that are known to be linked to traits in modern humans.

The authors' gene sequencing suggests that the individuals from Luxembourg and Spain, although dark-skinned, probably had blue eyes and belonged to groups known to be hunter-gatherers. The German woman, meanwhile, had brown eyes and lighter skin, and was related to Middle Eastern groups known to have developed farming. However, both the Luxembourg hunter-gatherer and the German farmer had multiple copies of a gene that breaks down starches in saliva, a feature that has been proposed to be an adaptation to the grain-laden diets characteristic of agricultural life. On the other hand, neither of them had the ability to digest the sugar lactose, found in milk, a trait that emerged in the Middle East after the domestication of cattle and that later spread to Europe.


The work also adds a few twists to the prehistory of Europe. Previous archaeological and genetic studies suggested that most of today’s Europeans are descended from Middle Eastern farmers who interbred with local hunter-gatherers in some regions and displaced these early residents in others. Krause’s team concludes that a third population contributed to the gene pool of contemporary Europeans.

This group, which the authors call ancient northern Eurasians, may have lived at high latitudes between Europe and Siberia until a few thousand years ago. Traces of this population were also detected in the genome of a 24,000-year-old Siberian child. Published online last month, the boy’s genome suggests that northern Eurasians interbred with the ancestors of Native Americans as well as with Europeans.

Diverse migrations

A comparison of the new data with genetic sequencing of present-day individuals shows that the current residents of various European countries are composites of these three groups. Scots and Estonians, for instance, have more northern Eurasian ancestry than any other modern European population sampled, whereas Sardinians are more closely related to Middle Eastern farmers than are other Europeans.

The new European ancient genomes also hint at early human forays out of Africa. Middle Eastern farmers, Krause’s team discovered, split off from African ancestors earlier than did European and Asian groups. One possible explanation for this pattern is that the farmers are descended from humans who inhabited 100,000–120,000-year-old settlements in Israel and the Arabian Peninsula.

Many researchers assumed that these sites represented failed migrations out of Africa, because other evidence suggested that humans left Africa less than 100,000 years ago.
“I don’t think anybody saw that coming,” says Eske Willerslev, a palaeogeneticist at the University of Copenhagen. But he says that the population’s existence will be tough to prove conclusively. “It’s super interesting — if it’s correct,” Willerslev adds.

“What would be really nice is to find some individual that is a direct observation of this population,” says Pontus Skoglund, an evolutionary geneticist at Uppsala University in Sweden. Ancient DNA does not last long in hot climates, and so finding DNA from this population in the Middle East may require technological advances, as well as a bit of luck.

Lalueza-Fox declined to discuss his team’s work, but he cautions against making too many assumptions about the peopling of Europe using just a handful of ancient genomes from a single time period. “It’s going to be lots of different migrations and movements,” he says. “There’s going to be plenty of room for investigation in the next few years.”

David Strumfels: I have been down on the SLS, because I thought companies like SpaceX were getting closer to heavy/manned launchers sooner and probably cheaper. A rocket like the soon-to-be-tested Falcon Heavy will not have the capabilities of the SLS, but two might, and still save money over the SLS (though the logistics of multiple coordinated launches would complicate things, eroding the savings); furthermore, SpaceX can take the existing Falcon Heavy and increase its capacity considerably (as it has been doing with the Falcon series in general) on a modest budget and time scale.

This article makes me wobble on my stance, however. (It certainly makes me pant.) Now that missions are being proposed for it, I start to see the advantages of having this kind of ultra-heavy lift capacity, and the plans for increasing this capacity strengthen those advantages. Once it's ready, the Falcon series will probably never catch up, and I doubt the company has serious plans to do so. So I'm starting to become a rooter. Or maybe it's just "rocket-envy."

New Mission Concepts for SLS With Use of Large Upper Stage

Boeing recently released a proposal for a new upper
stage for the SLS Block I configuration that would enable the launch of even
heavier payloads to deep-space destinations. Image Credit: NASA

Boeing recently released a set of proposals for new missions beyond low-Earth
orbit for NASA’s Space Launch System, or SLS, utilising a newly designed Upper
Stage.

The SLS is NASA’s next generation heavy-lift launch vehicle (HLV), which is
on schedule to make its inaugural unmanned test flight in 2017. It will be used
for launching the Orion Multi-Purpose Crew Vehicle, or MPCV, the space agency’s
next manned spacecraft currently under construction, to deep-space destinations
such as the Moon and Mars—a capability that had been lost with the cancellation
of the Apollo lunar missions more than 40 years ago.

An artist's rendering of the various configurations of
NASA’s Space Launch System. Image Credit: NASA

Although the first two launches of the SLS in 2017 and 2021 respectively are
designed to be test flights of the initial Block I configuration capable of
delivering 70 metric tons to low-Earth orbit (LEO), NASA is designing the heavy
launch system to be evolvable, with the final Block II configuration having a
payload capacity of 130 metric tons to LEO—rivaling the capability of the Saturn
V rocket that sent humans to the Moon during the 1960s and ’70s.

As reported at the NASA
Spaceflight.com website, Boeing, which is the prime contractor for designing
and building the SLS’s core and upper stages, recently presented its proposal
for a new Large Upper Stage, or LUS, for use on the SLS, which would enable new
missions to low-Earth orbit and beyond.

The currently designed upper stage for the Block I version of SLS is an
Interim Cryogenic Propulsion Stage (ICPS), also known as the Delta Cryogenic
Second Stage, or DCSS. This is the same upper stage used on the Delta IV rocket,
and it employs a single RL10B-2 engine developed by Pratt &amp; Whitney.
Although the ICPS would boost a 70-metric-ton payload to LEO in the SLS Block I
version, Boeing’s proposal for the LUS would significantly advance this
capability to more than 90 metric tons, allowing for even more ambitious
deep-space missions. “A new 8.4m Large Upper Stage (LUS), as a follow on to the
interim Cryogenic Propulsion Stage (ICPS), can provide significant increases in
SLS payload injection capability,” notes the company in its presentation.

An alternate concept for placing a human outpost at the
Earth-Moon L2 point with the SLS is Skylab
II, proposed by a team of engineers working with the Advanced Concepts
Office at NASA’s Marshall Space Flight Center. Image Credit: NASA

“Someone reminded me that, up until the last two modules were put up on the
ISS (via Shuttle), Skylab had more crew volume,” says Jim Crocker, Vice
President and General Manager, Civil Space, Lockheed Martin Space Systems Co.
“Skylab was done with one Saturn V. Sometimes it requires re-thinking of what
you’re doing.”

These new mission concepts studied by Boeing fall into four main categories:
LEO destinations, cislunar and lunar missions, Mars, and outer Solar System
destinations.

One of the payloads that could take advantage of the SLS’s LEO payload
capability, according to the Boeing study, is Bigelow Aerospace’s proposed BA 2100 inflatable habitat.
Bigelow Aerospace is best known for its innovative and ambitious work on
designing and launching inflatable modules in orbit, based on NASA’s TransHab
technology. The private company has already launched two experimental modules in
low-Earth orbit, Genesis I and II, in 2006 and 2007 respectively. It is already
planning to send a Bigelow Expandable Activity Module, or BEAM, to the
International Space Station in 2015 and plans to develop the first privately
built space station called Bigelow Commercial Space Station, constructed from
several BA 330 modules that are currently under development.

Yet the company revealed even more ambitious plans at the 2010 International
Symposium for Private and Commercial Spaceflight held in Las Cruces, N.M., with
the unveiling of the BA 2100, or Olympus, module concept. As the number in its
name implies, the BA 2100 would feature 2,100 cubic meters of living volume,
completely dwarfing the smaller BA 330. This enormous habitat would have
a calculated mass of 70 to 100 metric tons, making the SLS the only heavy-lift
vehicle capable of placing it into orbit. That fact was acknowledged by Bigelow
Aerospace Vice President Jay Ingham as well. “If a super-heavy-lift launch
vehicle ever did exist, probably in the range of around 100 metric tons, [it]
would require an 8-meter fairing to launch the BA-2100,” Ingham said during the
Symposium.

“SLS allows delivery of the BA-2100 via direct insertion to a low earth orbit
and is the only launch vehicle capable of delivering a payload this large to
LEO,” notes Boeing in its LUS concept presentation. Besides being used as a
space hotel complex or space science research laboratory in low-Earth orbit, the
BA 2100 could also be used as a large self-sufficient crew habitat for long
interplanetary missions to Mars or anywhere else in the Solar System.

The main purpose of the SLS is to enable beyond-Earth orbit human space
exploration. In 2011 Boeing proposed a design for a cislunar Exploration Gateway
Platform, located in the L1 or L2 Lagrangian points of the Earth-Moon system.
This mission concept envisioned the use of existing left-over hardware from the
ISS program for the construction of a cislunar manned outpost that would enable
regular access to cislunar space and the lunar surface itself. Many within the
space agency view this concept as the next logical step beyond low-Earth orbit
for human exploration, serving as a testing ground for long-duration missions
prior to a human trip to Mars. “Building a translunar outpost is an important
first step in retrieving an asteroid, returning to the moon or venturing to
Mars.
Using the SLS/LUS would allow the Exploration Platform to be constructed
and crewed in only two launches, as opposed to the four missions required using
SLS/ICPS (interim Cryogenic Propulsion Stage), thus saving cost and
significantly shortening the time required to start accruing the benefits of a
crewed Exploration Platform in translunar space,” notes the new Boeing
study.

Boeing’s proposal for an Exploration Gateway Platform,
at a Lagrangian point in the Earth-Moon system. Image Credit:
Boeing/NASA

A mockup of the proposed Bigelow BA 2100 inflatable
module. With a projected payload mass between 70 and 100 metric tons, the only
launch vehicle existing or under development that could place it to orbit is the
SLS. Image Credit: Bigelow Aerospace

Besides human space exploration, the LUS could greatly advance robotic
exploration as well, with its ability to directly send large interplanetary
spacecraft to their destinations to the outer Solar System, mitigating the need
for multiple gravity-assist maneuvers that greatly prolong the mission duration
to many years or decades.

The Interim Cryogenic Propulsion Stage currently designed for the SLS Block I
version uses a 5m payload fairing that is capable of sending approximately 3
metric tons of payload to Jupiter, 1.8 tons to Saturn, and just 0.13 tons to
Uranus. The proposed LUS upper stage, featuring an 8.4m payload fairing, would
be capable of sending three to four times more massive payloads to these
destinations.
Depending on the LUS variant being used (having a single, dual, or
4-engine configuration), the new upper stage could directly send a payload of
approximately 8.5 metric tons to Jupiter, 6 tons to Saturn, and 2 tons to
Uranus. This payload capability would enable new and exciting missions to
Europa, Titan, Enceladus, and Uranus that just aren’t possible with existing
launch vehicles today.
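Taking the payload figures above at face value, a quick back-of-the-envelope comparison (values copied from the text, in metric tons) gives the improvement factor per destination:

```python
# Payload-to-destination figures quoted above (metric tons).
icps = {"Jupiter": 3.0, "Saturn": 1.8, "Uranus": 0.13}
lus = {"Jupiter": 8.5, "Saturn": 6.0, "Uranus": 2.0}

# Improvement factor of the proposed LUS over the current ICPS design.
improvement = {dest: lus[dest] / icps[dest] for dest in icps}
# Jupiter and Saturn gain roughly 3x; Uranus gains over 15x, since the
# ICPS figure for Uranus is so small to begin with.
```

The roughly threefold gain for Jupiter and Saturn matches the "three to four times" figure quoted above; the Uranus gain is far larger.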
“The SLS provides a critical heavy-lift launch capability
enabling diverse deep space missions,” states the Boeing report. “The added
payload to destination that can be provided by a new Large Upper Stage, would be
an enhancement for future science, astronomy, and Human spaceflight
missions.”

The 4-engine configuration of Boeing’s proposed Large
Upper Stage for the SLS. Image Credit: NASASpaceFlight.com

The new SLS/LUS lift capability would enable a Europa orbiter or lander to
reach the Jovian moon in approximately three years after launch, and a similar
mission could reach Titan in four. “Imagine the science return with SLS, where
we can get there within a few years, and how that can accelerate scientific
discovery,” says Crocker. “We don’t know what we’re going to find in science,
but we do know that if you find it sooner, you get a much higher science return
for your investment.”

A dedicated Uranus orbiter has also long been a goal of the planetary science
community. A Uranus orbiter is listed as the third-highest-priority Flagship
mission after Mars and Europa in the 2013-2022 U.S. Planetary Science Decadal
Survey. Dr. Mark Hofstadter, a planetary scientist at NASA’s Jet Propulsion
Laboratory, stressed that point during a presentation
at the January 2013 meeting of the Outer Planets Assessment Group, in Atlanta,
Ga. “The Group is concerned that no action was taken on its findings last year
regarding a Uranus mission study,” notes Hofstadter in the presentation, “and
again urges that NASA initiate such a study responsive to Decadal Survey science
goals for the ice giants.”

Planetary exploration wouldn’t be the only field in space science that could
benefit from the use of the SLS/LUS concept. The new upper stage would also be
able to lift the proposed Advanced Technology Large Aperture Space Telescope, or
ATLAST, that is under
consideration by NASA. ATLAST is a next generation space telescope, featuring a
monolithic 8m primary mirror, four times bigger than the one on the Hubble Space
Telescope. An alternative design also calls for a 16m segmented primary mirror,
which could also fit inside the bigger LUS payload fairing. According to the
Space Telescope Science Institute’s project website, “ATLAST will have an
angular resolution that is 5 – 10 times better than the James Webb Space
Telescope and a sensitivity limit that is up to 2000 times better than the
Hubble Space Telescope … It is envisioned as a flagship mission of the 2025 –
2035 period, designed to address one of the most compelling questions of our
time. Is there life elsewhere in our Galaxy? It will accomplish this by
detecting ‘biosignatures’ (such as molecular oxygen, ozone, water, and methane)
in the spectra of terrestrial exoplanets.”

Maybe the most important aspect of the LUS design is its cost-saving approach
to the SLS’s development. If chosen by NASA, the LUS would be constructed at the
agency’s Vertical Weld Center at Michoud Assembly Facility in New Orleans, where
Boeing will already be constructing the SLS’s Core Stage, without the need for
extra welding or other machining equipment, thus helping to further bring the
SLS’s development costs down.

Since its inception in 2011, the SLS has been heavily criticised by many
within the space community for its perceived lack of missions. During her recent
appearance on a radio talk show,
former NASA Deputy Administrator Lori Garver heavily criticised the SLS as being
a rocket to nowhere. “Where is it going to go?” she asked during the show.
Although Boeing’s LUS upper stage concept hasn’t yet been approved by NASA, it
nevertheless largely invalidates Garver’s criticism by showcasing that the space
agency’s newest heavy-lift vehicle could be used for all sorts of exciting and
ambitious human and robotic missions throughout the Solar System.

At first sight this seems the usual delightfully charming Hitchens saying the kind of things he was and is so famous and fondly remembered for. Only on second or third sight did I realize that, at least this time, the great man was in error. Completely in error.

For one thing, it is an unforgivable insult to chimpanzees all over the world; worse, to the entire -- sans H. sapiens -- animal kingdom on this planet. If creating gods really is maladaptive, irrational, and just plain foolish, then we humans must be at the bottom of the evolutionary cesspool, hardly near the top. Yet Hitchens unfortunately implies exactly the opposite, as though creating supernatural beings somehow "reduces" our evolutionary pedestal down to the level of our clownish closest cousin. By half a chromosome, but that's a lot.

But then I re-wondered about it. How do we know that chimps haven't made, somewhere in their history, things akin to our gods? They certainly aren't capable of science, nowhere near our level at least; so why shouldn't they believe in the supernatural? Lack of science is certainly most of the reason almost all of our ancestors believed in gods and other supernatural phenomena.

On the other hand, are chimps capable of belief, which involves some pretty sophisticated cognition skills? I don't know if (or how) there have been studies of this, or even conclusions drawn, but I will say this: don't bet your life's fortune that the answer is negative. So many animals, including the other apes, have been shown over the last 20-40 years to possess much more sophisticated behaviors and cognition skills than we'd ever suspected. New discoveries seem to happen almost daily. (Hence the animal rights movement.)

"Half a chromosome away from being chimpanzee" is probably at least a bad analogy; that little bit of DNA might not make as much difference as we suppose. It is certainly nothing to be either proud or ashamed of, and quite possibly has nothing to do with believing in gods or not.

Dark-matter hunters may need to check their calendars. The sun's gravity could change the time when dark matter signals are detected on Earth, which could help sharpen the search for the elusive substance.

Invisible dark matter is thought to make up most of the matter in the universe. Physicists hope to detect it in the form of weakly interacting massive particles (WIMPs) when they collide with ordinary matter in underground detectors.

Some have argued that the rate of such interactions should vary with the seasons, as Earth's orbit brings it ploughing through the cloud of dark matter suffusing the galaxy. When the planet heads into this "WIMP wind", around 1 June, we should see more dark matter strikes; in December, when Earth is moving downwind, we should see fewer.

Warp factor

Now, Benjamin Safdi of Princeton University and his colleagues note something that all experiments have neglected: the sun. As WIMPs stream through the solar system, the sun's gravity bends their trajectories, focusing the streaming particles on a particular location in Earth's orbit. This effect can shift the date of the maximum number of collisions by anything from a few days up to several months, depending on the WIMPs' mass and speed. "This force warps the dark matter 'wind' in a way that had not previously been noticed," Safdi says.
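The annual modulation and the phase shift described here can be illustrated with a toy model; the mean rate, amplitude, and shifted peak day below are made-up illustrative numbers, not values from any experiment or from Safdi's paper:

```python
import math

def wimp_rate(day, mean_rate=1.0, amplitude=0.05, peak_day=152):
    """Toy annual-modulation model: the detection rate varies
    sinusoidally over a 365-day year, peaking at `peak_day`
    (day 152 is roughly 1 June). All parameters are illustrative."""
    phase = 2 * math.pi * (day - peak_day) / 365.0
    return mean_rate * (1.0 + amplitude * math.cos(phase))

# Without gravitational focusing the rate peaks around 1 June and bottoms
# out in early December; a focusing-induced shift simply moves peak_day,
# which is what an energy-dependent analysis would look for.
june_rate = wimp_rate(152)       # peak of the unshifted curve
december_rate = wimp_rate(335)   # near the minimum
```

In this picture, the gravitational-focusing effect amounts to `peak_day` drifting away from 1 June as a function of the WIMPs' energy.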

The fact that the date of maximum WIMP collisions should change depending on their energy could lend future searches a sharper scalpel to scrape true dark matter signals away from background noise, he adds.

"Our result gives dark matter direct-detection experiments an excellent way of distinguishing real interactions with the galactic dark matter halo from background," Safdi says. "It is hard to imagine a background source which could mimic this energy-dependent modulation."


Punchline coming

The work does not explain the possible dark-matter signal reported by the DAMA experiment in Italy, but re-analysing the data using the new approach could help support or refute its results, Safdi says.

"There is already a slight trend in the data consistent with our prediction for the gravitational focusing effect – that is, the date of the maximum moves further away from June 1 at lower energies," says team member Samuel Lee, also of Princeton. "One punchline of our study is that accounting for the gravitational focusing effect can perhaps rule out or confirm the dark-matter interpretation of the DAMA annual modulation."

Richard Gaitskell of Brown University in Providence, Rhode Island, who works on a direct-detection experiment in South Dakota called LUX, says that the new work could be important for helping design future experimental set-ups. "These researchers have clearly demonstrated just how potentially interesting data from a direct-detection experiment can be," he says.

In a Scientific American essay based on their new book The Grand Design, Stephen Hawking and Leonard Mlodinow are now claiming physicists may never find a theory of everything. Instead, they propose a "family of interconnected theories" might emerge, with each describing a certain reality under specific conditions.

Most of the history of physics has been dominated by a realist approach. Scientists simply accepted that their observations could give direct information about an objective reality. In classical physics, such a view was easily defensible, but the emergence of quantum mechanics has shaken even the staunchest realist.

In a quantum world, particles don't have definite locations or even definite velocities until they've been observed. This is a far cry from Newton's world, and Hawking/Mlodinow argue that - in light of quantum mechanics - it doesn't matter what is actually real and what isn't, all that matters is what we experience as reality.

As an example, they talk about Neo from The Matrix. Even though Neo's world was virtual, as long as he didn't know it there was no reason for him to challenge the physical laws of that world. Similarly, they use the example of a goldfish in a curved bowl. The fish would experience a curvature of light as its reality, and while that view wouldn't be accurate to someone outside the bowl, to the fish it would be.

"In our view, there is no picture- or theory-independent concept of reality. Instead we adopt a view that we call model-dependent realism: the idea that a physical theory or world picture is a model (generally of a mathematical nature) and a set of rules that connect the elements of the model to observations. According to model-dependent realism, it is pointless to ask whether a model is real, only whether it agrees with observation. If two models agree with observation, neither model can be considered more real than the other. A person can use whichever model is more convenient in the situation under consideration."

This view is a staunch reversal for Hawking, who 30 years ago argued that not only would physicists find a theory of everything, but that it would happen by the year 2000. In his first speech as Lucasian Chair at Cambridge titled "Is the end in sight for theoretical physics?," Hawking argued that the unification of quantum mechanics and general relativity into one theory was inevitable and that the coming age of computers would render physicists obsolete, if not physics itself.

Of course, Hawking has become rather well known for jumping way out on a limb with his public remarks, and for decades he embraced supergravity as having the potential to solve theoretical physicists' ills, even hosting a major conference on it in 1982. However, Hawking has never harbored allegiances to theories that describe a physical reality.

So, while two well-known physicists coming out against a theory of everything is compelling, it really shouldn't seem like anything new for Hawking.

"I take the positivist view point that a physical theory is just a mathematical model and that it is meaningless to ask whether it corresponds to reality. All that one can ask is that its predictions should be in agreement with observation."

Large-scale storage of renewable energy for use when the sun isn't shining and the wind isn't blowing. Energy from solar panels is shown stored in green and blue chemicals in Harvard flow battery storage tanks, powering this green city at …
A team of Harvard scientists and engineers has demonstrated a new type of battery that could fundamentally transform the way electricity is stored on the grid, making power from renewable energy sources such as wind and solar far more economical and reliable.
The novel battery technology is reported in a paper published in Nature on January 9. Under the OPEN 2012 program, the Harvard team received funding from the U.S. Department of Energy's Advanced Research Projects Agency–Energy (ARPA-E) to develop the innovative grid-scale battery and plans to work with ARPA-E to catalyze further technological and market breakthroughs over the next several years.

The paper reports a metal-free flow battery that relies on the electrochemistry of naturally abundant, inexpensive, small organic (carbon-based) molecules called quinones, which are similar to molecules that store energy in plants and animals.

The mismatch between the availability of intermittent wind or sunshine and the variability of demand is the biggest obstacle to getting a large fraction of our electricity from renewable sources. A cost-effective means of storing large amounts of electrical energy could solve this problem.

The battery was designed, built, and tested in the laboratory of Michael J. Aziz, Gene and Tracy Sykes Professor of Materials and Energy Technologies at the Harvard School of Engineering and Applied Sciences (SEAS). Roy G. Gordon, Thomas Dudley Cabot Professor of Chemistry and Professor of Materials Science, led the work on the synthesis and chemical screening of molecules. Alán Aspuru-Guzik, Professor of Chemistry and Chemical Biology, used his pioneering high-throughput molecular screening methods to calculate the properties of more than 10,000 quinone molecules in search of the best candidates for the battery.

Flow batteries store energy in chemical fluids contained in external tanks—as with fuel cells—instead of within the battery container itself. The two main components—the electrochemical conversion hardware through which the fluids are flowed (which sets the peak power capacity), and the chemical storage tanks (which set the energy capacity)—may be independently sized. Thus the amount of energy that can be stored is limited only by the size of the tanks. The design permits larger amounts of energy to be stored at lower cost than with traditional batteries.
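The independent sizing of power and energy can be made concrete with a small sketch. The function and all the numbers below are hypothetical round figures for illustration, not values from the Harvard paper:

```python
# Flow-battery sizing sketch: the conversion stack sets peak power, the
# tanks set energy capacity, and the two can be chosen independently.
# All figures are hypothetical round numbers, not values from the paper.

def flow_battery_specs(stack_power_kw, tank_volume_l, energy_density_wh_per_l):
    """Return (peak power in kW, energy capacity in kWh, discharge hours)."""
    energy_kwh = tank_volume_l * energy_density_wh_per_l / 1000.0
    return stack_power_kw, energy_kwh, energy_kwh / stack_power_kw

# Same stack, four times the tank volume -> four times the stored energy
# and four times the discharge duration at full power.
print(flow_battery_specs(50, 10_000, 25))   # (50, 250.0, 5.0)
print(flow_battery_specs(50, 40_000, 25))   # (50, 1000.0, 20.0)
```

Doubling a conventional battery's capacity roughly doubles its cost; here, only the (cheap) tanks and fluid grow.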

The artist's concept depicts Kepler-62f, a super-Earth-size planet in the habitable zone of a star smaller and cooler than the sun, located about 1,200 light-years from Earth in the constellation Lyra. Credit: NASA/Ames/JPL-Caltech

Life on Earth-like planets can exist at least ten times farther away from their stars than previously thought, scientists have found, calling into question our whole perspective on habitable zone distances.

A new paper published in the journal Planetary and Space Science describes how living organisms could survive beneath the surfaces of planets whose surfaces are uninhabitable.
This includes planets a staggering distance away from their stars, as well as those recently discovered to be drifting through space by themselves, with no apparent host star. It all comes down to temperature.

The previously accepted assumption was that the 'Goldilocks' zone was a requirement: the band of distances neither too far from nor too close to a star, where the climate can sustain liquid water that is neither boiling hot nor frozen.

Now a team of researchers from Aberdeen and St. Andrews universities has an updated view of things. PhD student Sean McMahon, author of the paper, says “that theory fails to take into account life that can exist beneath a planet's surface. As you get deeper …the temperature increases, and once you get down to a temperature where liquid water can exist – life can exist there too.”

To prove this, the scientists devised a computer model that approximates temperatures below the surfaces of planets from two inputs: the distance to the host star and the planet's size.

Using that model they discovered that the radius around a star capable of supporting life increases three-fold once the depth at which life can exist below a planet's surface is taken into account. "The deepest known life on Earth is 5.3 km below the surface, but there may well be life even 10 km deep in places on Earth that haven't yet been drilled," McMahon said.

What adds to the excitement is that the model allows for potentially expanding the habitable zone even more. If we do indeed find life 10 km below the Earth's surface, the math tells us that Earth-like planets could support life at up to 14 times the distance previously considered to be the Goldilocks zone.

To put this into perspective: our Sun's habitable zone is currently considered to reach out about as far as Mars. But new measurements that account for life existing under rocky surfaces take that radius as far out as Jupiter and Saturn.
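As a rough illustration of how such a model scales, here is a toy calculation (not the authors' actual code), assuming a Sun-like star, the standard airless equilibrium-temperature scaling, and a hypothetical Earth-like geothermal gradient of 25 K per kilometer of depth:

```python
import math

GRADIENT_K_PER_KM = 25.0   # hypothetical Earth-like geothermal gradient
T_FREEZE = 273.0           # K, threshold for liquid water

def max_habitable_distance_au(depth_km):
    """Farthest distance (AU) from a Sun-like star at which the temperature
    at `depth_km` below the surface still reaches 273 K, using the airless
    equilibrium-temperature scaling T_eq ~ 279 K / sqrt(d_AU)."""
    boost = GRADIENT_K_PER_KM * depth_km       # heating supplied from below
    if boost >= T_FREEZE:
        return math.inf   # warm enough at depth with no starlight at all
    return (279.0 / (T_FREEZE - boost)) ** 2

print(max_habitable_distance_au(0.0))   # ~1.0 AU: the classic surface zone
print(max_habitable_distance_au(5.3))   # ~3.9 AU: roughly the reported three-fold increase
```

With these assumed numbers, counting life down to 5.3 km pushes the habitable radius out by roughly the factor of three the paper reports; deeper biospheres extend it much further still.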

For example, the recently discovered Gliese 581 d could be a candidate. Sure, it is about 20 trillion kilometers away, but its cold surface could well hide life a couple of kilometers below the surface, scientists assume.

Scientists are excited by the subsurface theory of sustaining life. They hope we can now widen our search for life, adding that the new findings are so radical that life on Earth (itself very different from the thousands of planets we know about) could itself be anomalous: life receives much more protection inside warm, mineral-rich rock than it does risking survival on an inhospitable surface.


Tuesday, January 7, 2014

Many many years ago, I had a very good friend, someone I cared for deeply. She was intelligent, funny, very kind and helpful, almost always (it seemed) in a good mood. She did have one "flaw" however, although I use that word with compassion because it was the kind of flaw that is (alas) probably just part of human nature. The flaw was a serious lack of skeptical thinking. Why, I'm not certain. She was easily intelligent enough to have it. Perhaps it was her years as a member of the one true religion and acquisition of some position and responsibility in it that defeated her skepticism and left her a believer. (To be fair, I was in this religion for a number of years too, but it didn't defeat my natural skepticism and I escaped in time.)

So much for overtures, because I want to discuss a specific event between her and me. One fine day (all days in San Diego are fine days, until you get sick of it) she told me about her "theory" that the ancient Egyptians (and perhaps Mesopotamians) must have visited the Mayan and/or other Central American cultures. Reasoning? Both "ancient" cultures built large pyramids constructed of stone. That was it; she offered no other reasoning, no other evidence or logic, in the "theory's" support. She was probably as certain of it as she was of her religious truths.

If your mind is anything like mine, and assuming you've heard this idea before, alarm bells were starting to clang in your head before you read this far. If you have a reasonable knowledge of history and geography (shame on you if you don't!) you can just sense that there is something seriously wrong here, that the pieces of this puzzle surely can't hang together. To revive an old saying, "You can feel it in your bones."

That was precisely my experience, and I believe it is essential if our skeptical abilities are to mature. Note that its main nutrient is knowledge, and not even very in-depth knowledge. When anyone tells us something that feels (to confirm: yes, I believe this usually starts as an act of intuition) out of synch with our own ideas and knowledge, it can make us startle as though we'd been teleported to a different world or time. Of course, if your ideas and knowledge are incorrect, skepticism is pretty much in vain.

That's where it starts, I suggest, with that (often small) sense of dislocation, because the claim conflicts with at least something we know to be true. But if you end there, you would rightfully be accused of just dismissing the person without argument. Furthermore, it would probably leave you with a funny feeling, as if you've failed yourself somehow. And you would be right here too. (Of course you can just make an agreeable grunt and change the subject; as I think Shaw said, "Arguments are to be avoided. They are always vulgar and often persuasive.")

Furthermore, there is always a real possibility of you being wrong, or not having enough facts at hand. Or you can't summon all your defenses for the barrage of logical fallacies and cognitive biases about to assault you.

So I decided to file the issue away, to ruminate about it later when I was alone and could think clearly. When I did, the objections to her "theory" came swiftly and completely enough. I am not going to go over them (I am confident that you can find them yourselves quickly too).

So what happened to us, her historical speculations, and so forth? Between us, nothing, for I knew better than to debate a firm believer in the one true religion -- I did say I cared very much for her, didn't I? Let sleeping cats sleep. But for me, it was an important triumph of my mind, a victory I have always carried with me, knowing I may need it any time. And don't doubt one thing: life has been that much richer for it.

By Peter Gwynne at http://physicsbuzz.physicscentral.com/2013/12/physicists-and-archaeologists-tussle.html A confrontation between ancient and modern studies is pitting particle physicists seeking concrete evidence of dark matter against marine archaeologists intent on preserving material in centuries-old shipwrecks. The source of the issue: samples of lead used for anchors and ballast in Roman ships that were sunk up to 2,000 years ago and have remained underwater ever since.

The ancient lead's purity makes it invaluable today for shielding underground experiments designed to detect evidence of dark matter, the mysterious invisible stuff that, according to physicists, accounts for 85 percent of all the matter in the universe. But some marine archaeologists assert that, as a part of the world's cultural heritage, the lead should stay in place for detailed historical study.

"The use of these objects as stock for experimentation had never been an issue before," wrote Elena Perez-Alvaro, a doctoral candidate in underwater cultural heritage maritime law at England's University of Birmingham, in the university's journal Rosetta. "But now it is beginning to be deemed ethically questionable."

Both sides of the affair cite strong scientific justification for their use of the lead. "Underwater archaeologists and cultural heritage protection policymakers need to evaluate the value of this underwater lead for future generations," Perez-Alvaro explained. But the lead "is an essential element of state-of-the-art dark-matter searches," added Cambridge University physicist Fernando Gonzalez Zalba, who collaborates with Perez-Alvaro on studying the issue. "These experiments could shed light on some of the most fundamental properties of the universe."

There's no shortage of the material. "I personally have seen dozens of lead anchor stocks during our expeditions in the Mediterranean and Aegean," recalled Brendan Foley of the Woods Hole Oceanographic Institution's Deep Submergence Laboratory, in Massachusetts.

For archaeologists, studying those stocks has value far beyond understanding ancient metallurgical methods. The pieces of lead "are marked with indicators of where they came from," said James Delgado, director of maritime heritage at the National Oceanic and Atmospheric Administration in the United States. "That helps us to reconstruct ancient economies and global trade."

Physicists have inferred the existence of dark matter by observing its gravitational influence in distant galaxies. But they don't know what it consists of. Among the most popular candidates are entities called weakly interacting massive particles, or WIMPs.

Theorists believe that, although WIMPs are about the size of atomic nuclei, they scarcely interact at all with any other forms of matter. "Very occasionally one of them will bump into a nucleus and rattle it around a bit," explained Daniel Bauer, project manager of the Cryogenic Dark Matter Search, or CDMS. "Our detectors are set up to measure the recoil of the nucleus when that happens," he added.

It doesn't happen often. "Nobody has yet had a completely confirmed sighting," Bauer said. Their detectors are sensitive to a rate of one incident per year.

Because the bumps happen so infrequently, CDMS has designed its experimental setup to minimize false positives. To avoid cosmic rays, the team has buried its detectors half a mile deep in a mine in Minnesota. It also shields them with copper, plastics, water, and, most important, lead.

"Lead is the material of excellence as a shielding material in radiation-rich environments," said Gonzalez Zalba, who does not work directly on dark-matter experiments. "Its low intrinsic radioactivity, good mechanical properties, and reasonable cost make it an excellent shielding material."

However, recently mined lead has one disadvantage. "Uranium and thorium that coexist with lead will leave a fair amount of the radioactive isotope lead-210 in it," Bauer noted. "In our experiments, even tiny amounts of radioactivity can lead to false signals. We want the purest possible material to shield the experiment from radioactivity."

That means lead mined a long time ago and preserved under water. "There's no chance that uranium and thorium are nearby," Bauer continued. "And since its decay half life is about 23 years, its radioactivity has basically gone." The ancient lead has over 1,000 times less radioactivity than modern lead.
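The arithmetic behind that claim is simple exponential decay. A quick sketch, using the article's figure of roughly 22–23 years for the lead-210 half-life:

```python
T_HALF_YEARS = 22.3   # Pb-210 half-life (the article's "about 23 years")

def pb210_fraction_remaining(years):
    """Fraction of the original Pb-210 activity left after `years` of decay."""
    return 0.5 ** (years / T_HALF_YEARS)

# After two millennia underwater, roughly 90 half-lives have elapsed,
# so essentially none of the original Pb-210 activity survives.
print(pb210_fraction_remaining(2000))   # ~1e-27, effectively zero
```

Freshly mined lead, by contrast, picks up new Pb-210 from the uranium and thorium in the ore, which is why age alone makes the Roman material so much quieter.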

The CDMS team bought its ancient lead from French company Lemer Pax, which had salvaged it from a Roman ship sunk off the coast of France. Later, the company "got in trouble with French customs for selling archaeological material," Perez-Alvaro reported.

"We assumed that this company was reputable, and I would believe that to be true," Bauer said. "They're still selling lead. That's the best evidence that everything is in order."

Another underground experiment, the Cryogenic Underground Observatory for Rare Events in Italy, also uses Roman lead. A museum gave it 120 archaeological lead bricks from a ship built more than 2,000 years ago and recovered in the early 1990s off the coast of Sardinia.

Marine archaeologists don't want to deny physicists the use of the ancient lead. But they fear that such use could help to commercialize the salvage of ancient shipwrecks.

"It's another example of something from a shipwreck that has value and will encourage an approach to shipwrecks that won't be available for careful meticulous study. Science and archaeology go out of the window in the quest for profits," Delgado said. "The issue is the salvaging and selling of the lead; that's where archaeologists say 'Wait a minute.'"

The 2001 UNESCO convention for the protection of the underwater cultural heritage preserves the Roman lead and other ancient artifacts from any use that would damage them. "However," Perez-Alvaro explained, "there is no reference anywhere to the use of shipwrecks for the purpose of experimentation – new uses of underwater cultural heritage."

Nevertheless, archaeologists and physicists see opportunities for agreements that would protect the ancient lead's heritage while still benefiting dark-matter searches. "It's all right if it's been documented – like taking a bit of DNA and putting it in the DNA bank," Delgado suggested. "That's a respectable scientific process that benefits all branches of science."

Gonzalez Zalba agreed. "We follow the idea of 'salvage for knowledge and not for the marketplace,'" he said. "Dark-matter searches follow under the idea of research for knowledge. Therefore I believe the resources should be granted if required under the adequate regulation and archaeological supervision."

Perez-Alvaro calls for a formal route to regulation. "There is a need for dialogue between the two fields," she said. "Especially there is a need for a protocol [on the acquisition and use of ancient lead] set up by archaeologists."

"Archaeologists will always view as unethical the outright sale of artifacts recovered from cultural sites," Foley added. "But other creative solutions could be devised which would be win-win for physicists and archaeologists."

Many people use the word "supernatural" without realizing that it is an illogical oxymoron. We don't know all the laws of our universe, so what is the term even supposed to mean? If a phenomenon can't be explained by existing science, then it is existing science that is inadequate, not nature that is refusing to accommodate the phenomenon.

For me, the classic example is the transition from the end of the nineteenth century to the beginning of the twentieth.

Thanks to the work of brilliant scientists over 300 years -- Galileo, Kepler, Newton, Lavoisier, Gauss, Priestley, Faraday, Maxwell, and others I've embarrassingly forgotten -- by the end of the 19th century science seemed to many people to be complete.

Not that there weren't unsolved problems: the heat capacity of polyatomic gases, for one; modeling stable atoms with existing physics; the photoelectric effect; and the quandary of the "ultraviolet catastrophe" in blackbody radiation. Worse, try as they might, scientists simply could not crack these nuts, could not make any progress, using the known (and supposedly complete) laws of physics.

Imagine that you lived at that time and were a believer in the supernatural. Why then, there's your answer! The problems couldn't be solved by natural science because they were above and beyond science. They're the work of God, or some supernatural deity or ... or who knew what, but they must be beyond our comprehension. Forever. Bow down and say amen.

Fortunately for all of us, any scientist worth their PhD intuitively understands my first paragraph. They realized that if the "known" laws of nature couldn't, no matter how much effort was applied, solve some basic physical problems, then either those laws were in some kind of error or there were more laws than we had so far discovered.

I'm not going to take us through the history of quantum mechanics and relativity. This is a blog, not a book, after all. I will say that without them, so much of our technology -- the Internet, computers, other electronic devices, many medical devices, and others I can't think of right now -- would not be in our lives. We would be living pretty much as people lived 100 years ago.

Supernatural. The lethal superfallacy of so much of history. Let's rid ourselves of it as swiftly as possible.

(Phys.org) —Due to their rapid improvements in a short amount of time, perovskite solar cells have become one of today's most promising up-and-coming photovoltaic technologies. Currently, the record efficiency for a perovskite solar cell is 15%, and it is expected to improve further. Although the perovskite material itself is relatively inexpensive, the best devices commonly use an expensive organic hole-conducting polymer, called spiro-OMeTAD, which has a commercial price more than 10 times that of gold or platinum.

In a new study, Jeffrey A. Christians, Raymond C. M. Fung, and Prashant V. Kamat from the University of Notre Dame in Indiana have found that copper iodide, an inexpensive inorganic hole-conducting material, may serve as a possible alternative to spiro-OMeTAD. Although the efficiency of perovskite solar cells containing copper iodide measured in this study is not quite as high as those containing spiro-OMeTAD, the copper iodide devices exhibit some other advantages that, overall, suggest that they could lead to the development of inexpensive, high-efficiency perovskite solar cells.

"The hole conductor is currently the most expensive part of perovskite solar cells," Christians told Phys.org. "Other organic hole conductor alternatives to spiro-OMeTAD have been investigated, but these alternatives still remain very expensive. This is the first reported inorganic hole conductor for perovskite solar cells, and is much less expensive than previously reported hole conductor materials.

This low-cost hole conductor could further lower the cost of these already inexpensive solar cells."
Perovskite solar cells, as a whole, are attractive because perovskites are a class of materials sharing a particular crystal structure, the same as that of calcium titanate (CaTiO3), the mineral perovskite. This structure gives solar cells high charge-carrier mobilities and long diffusion lengths, allowing the photo-generated electrons and holes to travel long distances without energy loss. As a result, the electrons and holes can travel through thicker solar cells, which absorb more light and therefore generate more electricity than thin ones.

Although this study marks the first time that copper iodide has been investigated for use as hole conductors in perovskite solar cells, copper-based hole conductors have previously shown promise for use in dye-sensitized and quantum dot-sensitized solar cells. Part of their appeal is their high conductivity. In fact, copper iodide hole conductors exhibit an electrical conductivity that is two orders of magnitude higher than spiro-OMeTAD, which allows for a higher fill factor, which in turn determines the solar cell's maximum power.

Despite copper iodide's high conductivity, the results of the current study showed that perovskite solar cells made with copper iodide hole conductors have a power conversion efficiency of 6.0%, lower than the 7.9% measured here for cells with spiro-OMeTAD hole conductors. The researchers attribute this shortcoming to the exceptionally high voltages of spiro-OMeTAD solar cells. In the future, they think the voltages of copper iodide solar cells can be increased, in particular by reducing the high recombination rate. The researchers calculated that, if they could achieve the highest parameter values observed in this study, the resulting copper iodide solar cell would have an efficiency of 8.3%.
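The way those device parameters trade off can be sketched with the standard efficiency formula, η = V_oc × J_sc × FF / P_in. The parameter values below are illustrative round numbers, not the ones measured in the study:

```python
P_IN = 100.0   # incident power under standard 1-sun illumination, mW/cm^2

def efficiency(v_oc, j_sc, fill_factor):
    """Power conversion efficiency in percent, given the open-circuit
    voltage V_oc (V), short-circuit current J_sc (mA/cm^2), and fill factor."""
    return 100.0 * v_oc * j_sc * fill_factor / P_IN

# Illustrative numbers only: a higher fill factor (helped by CuI's high
# conductivity) can partly offset a lower open-circuit voltage.
print(efficiency(0.55, 17.8, 0.62))   # ~6.1%
```

This makes clear why raising the voltage, rather than the current or fill factor, is the most direct route to closing the gap with spiro-OMeTAD cells.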

The researchers also observed that the copper iodide solar cells exhibited another surprising advantage, which is good stability. After two hours of continuous illumination, the copper iodide cells showed no decrease in current, while the current of the spiro-OMeTAD cells decreased by about 10%. The researchers plan to further improve the devices in the future.

"We are currently working to understand the cause of the low voltage in copper iodide-based perovskite solar cells," Christians said. "With further work, we aim to increase the stability and improve the efficiency of these solar cells above 10%."

Republicans’ Medicare plan would be wide open to the same attacks the GOP is aiming at Obamacare.

For Republicans, Obamacare is the gift that keeps on giving. Each day brings a fresh batch of horror stories of people losing their plans, getting cut off from their doctors, and shelling out more for premiums.

But had Mitt Romney won in 2012 and let Paul Ryan have his way with Medicare, Republicans would be on the other side of the fence, trying to defend a health care overhaul that produced a nearly identical suite of horror stories.

That's because, despite the political chasm between them—and though neither will admit it—Obama and Ryan are pushing similar policies in the bid to change the U.S. health system. Both rely on private insurance, sold through a competitive exchange, with help from a government subsidy.
And though they apply it to different populations, both programs share a fundamental conceit: They move a big group of people into the private insurance market. Both Obama and Ryan argue their overhaul would improve the country as a whole, but neither can escape the reality that in a shift of that size, some people will lose out.

Some premiums will go up

Insurance companies cut back on coverage or limit provider networks to keep premiums low. Lower premiums also will usually come with higher deductibles. This is pretty much how private insurance works, and that will be the case whether Obama or Ryan is expanding the market for private insurance.

The Congressional Budget Office has said seniors' costs would be higher under Ryan's model, though it has declined to provide a specific estimate, in part because the plan hasn't been introduced as a bill.
A Ryan-like plan that immediately affected current seniors would raise seniors' premiums by an average of 30 percent, and their total spending—including premiums, deductibles, and other cost-sharing—by about 11 percent, according to CBO.

CBO's estimate isn't an exact comparison to the Ryan plan, because it assumes changes would affect current beneficiaries—which Ryan's plan wouldn't. But liberal health care experts pointed to the report as an indication of how the Medicare program would be different once a policy framework similar to Ryan's was fully in place.

The House Budget Committee, which Ryan chairs, did not respond to a request for comment for this story.

Some people can't keep their doctors

Republicans have assailed the Affordable Care Act because many of the plans offered through its exchanges use narrow networks of doctors, hospitals, and other health care providers. Conservatives sharply criticized the White House after Zeke Emanuel, a former health care adviser, said that if you like your doctor, you can pay more to keep your doctor.
But, again, the same basic trade-off applies under the Ryan Medicare plan. The Ryan plan guarantees that seniors will have a subsidy big enough to buy a health care plan. But in most parts of the country, it won't be enough to buy traditional Medicare.
So, in order to choose that program—and its extensive provider network—seniors would have to make up the difference out of their own pocket. They could pay more for the plan that exists today, or they could switch to a cheaper private plan that would likely offer a smaller provider network, meaning they might have to change doctors.
Premiums for traditional Medicare would cost seniors about 56 percent more than they pay today, under the accelerated scenario CBO analyzed. About half of Medicare beneficiaries would buy private plans and half would remain in traditional Medicare, under CBO's model.

Losers, but different losers

Obamacare and the Ryan plan are similar, but it's important to remember their respective starting points. Obamacare is primarily covering people who have never had insurance before, and also requiring some people (no one knows exactly how many, but it's somewhere in the millions) to buy new policies. Ryan, meanwhile, would overhaul an existing program.
"With Medicare, you're talking about the whole 40-plus million beneficiaries who are going to have to make new choices and whose benefits and premiums are likely to be affected," said Paul Van de Water, a senior fellow at the Center on Budget and Policy Priorities, which opposes Ryan's model for Medicare.
From a cost perspective, that means the Ryan plan has one especially big winner: the federal budget. The purpose of Ryan's plan is to cut federal entitlement spending, and it would do that. Overall costs, combining federal spending and seniors' costs, would also fall.
Obamacare launched a new stream of federal health care spending while the Ryan plan would shrink an existing one. That's a big difference. But both options would expand the market for private insurance, and therefore would expose millions more people to narrow networks and the other standard trade-offs of the insurance market. Both would inevitably mean some degree of sticker shock for certain people, and paying a lower price would mean giving up benefits.
"How it all works out is complicated, but that's another point of comparison with health reform," Van de Water said.

About Me

My formal training is in chemistry. I also read a great deal of physics and biology. In fact I very much enjoy reading in general, mostly science, but also some fiction and history. I also enjoy computer programming and writing. I like hiking and exploring nature. I also enjoy people; not too much in social settings, but one on one; also, people with interesting or "off-beat" minds draw me to them. I also have some interest in Buddhism.

These days I get a lot more information from the internet, primarily through Wiki. Some television, e.g., documentaries and PBS shows like "Nova" and "Nature".

My favorite science writers are Jacob Bronowski ("The Ascent of Man") and Richard Dawkins (his "The Blind Watchmaker" is right up there with Ascent). I also have a favorite writer on Buddhism, Pema Chodron. Favorite films are "Annie Hall" (by Woody Allen), "The Maltese Falcon", "One Flew Over The Cuckoo's Nest", "As Good As It Gets", "Conspiracy Theory", "Monty Python and the Holy Grail" and "Life of Brian", and a few others which I can't think of at the moment.

I love a number of classical works (Beethoven's "Pastoral", Debussy's "Afternoon of a Faun" and "Clair de Lune", and Pachelbel's "Canon" come to mind). My favorite piece is probably Gershwin's "Rhapsody in Blue". But I also enjoy a great deal in modern music, including many jazz pieces, folk songs by people like Dylan and Simon and Garfunkel, a hodgepodge of pieces by Crosby, Stills, and Nash and Neil Young, and practically everything the Beatles wrote.

My life over the last few years has been in some disarray, but I am finally "getting it together." As I am very much into the sciences and writing, I would like to move more in this direction. I also enjoy teaching. As for my political leanings, most people would probably describe me as basically liberal, though not extremely so. My religious leanings are absolutely none: I've alluded to my interest in Buddhism, but again my interest is not in any supernatural or scientifically untested aspect of it but in the way it provides a powerful philosophy and a set of practical, day-to-day methods of dealing with myself and other human beings.