A little about me

This is my attempt to make complex subjects accessible to an interested public - science, philosophy, neuroscience, physics - things of curiosity to someone who grew up wanting to be an astronaut and then...well just became engaged with life!
@lucidkevinor


The astronauts of the International Space Station welcomed the arrival of what we call the “Bigelow Bungalow”, officially known as the Bigelow Expandable Activity Module (BEAM) on April 10.

If all goes to plan, the station’s robotic arm will install the module later this week, although, according to NASA’s Kirk Shireman, it won’t be inflated until late May. BEAM will then remain inflated for a period of two years.

Expandable Habitats and BEAM Installation Animation, NASA.

The arrival of the inflatable module is a significant achievement for the future of space habitation and exploration. It is a major achievement for private space enterprises, especially for Bigelow Aerospace, which built the module, and SpaceX, which delivered it.

It is also an extraordinary achievement for public-private partnership, the commercialisation of government-funded research and NASA’s strategy to stimulate the commercialisation of space.

And it shows that the perceived dichotomy of public and private in space is a false one. It certainly looks like the future of space exploration and exploitation lies in these cooperative ventures.

Why an inflatable habitat?

The idea of using inflatable habitats in space is not new. NASA’s first telecommunication satellite, Echo 1, was an inflatable Mylar balloon. And, in the early 1960s NASA developed the concept of an inflatable space habitat.

[…] unlike many early space station concepts, this design actually made it out of the concept phase and into production, though no models were ever flown.

NASA’s inflatable habitat prototype from 1961. NASA

Concepts for inflatable lunar bases were also drawn up, as with this one from 1989, replete with, “a small clean room, a fully equipped life sciences lab, a lunar lander, selenological work, hydroponic gardens, a wardroom, private crew quarters, dust-removing devices for lunar surface work and an airlock”.

Inflatable Moon Base. NASA concept art, 1989. NASA

In the 1990s, NASA developed the Transhab concept. Transhab was originally proposed as living quarters for a Mars mission, then developed as a possible crew quarters for the ISS.

However, the program was cancelled in 2000 due to budget constraints. The close resemblance between the Transhab concept and BEAM is no coincidence; Bigelow’s inflatables have evolved directly from Transhab.

Enter Robert T Bigelow, who made his money in the hotel industry and founded Bigelow Aerospace in 1998. Starting in 2002, Bigelow entered into the first of a series of Space Act Agreements with NASA. In 2003, NASA licensed to Bigelow the patents related to Transhab expandable habitats.

The company launched its first test modules, Genesis 1 and Genesis 2, aboard a Russian Dnepr rocket in 2006 and 2007. These modules are currently still in orbit and providing invaluable images, videos and data. They were the lowest-cost spacecraft fabrications and launches in aerospace history.

While astronauts aboard the ISS might wish to have the additional space of a new small bedroom approximately four metres long and 3.2 metres wide, the purpose of the installation is entirely experimental. So it will remain empty and uninhabited for the duration of its deployment on the ISS.

The primary goal of BEAM’s two-year deployment is to test its safety for future occupation, including radiation protection and the various procedures for delivery and installation.

During the two-year test, it will be monitored for structural integrity, temperature control and resistance to micro-meteoroids and other sources of leaks.

NextSTEP: deep space exploration?

One of NASA’s current strategies “is to stimulate the commercial space industry” through Space Act Agreements and the more recent Next Space Technologies for Exploration Partnerships (NextSTEP) program.

Components for the ISS are currently limited in weight and volume. This is where the inflatable habitats provide a scalable future for extraterrestrial habitation, provided they pass safety and durability requirements. Their volume and shape will not be restricted by the launch capabilities of the available rockets.

David Parker Brown at Airline Reporter has some great photos of a walk-through of a B330 mock-up here. The direct evolution from NASA’s Transhab is clearly apparent in the B330’s structure.

B330 clamshell. Bigelow Aerospace

On April 11, just a day after BEAM was delivered to the ISS, Bigelow Aerospace and United Launch Alliance announced a partnership to launch two B330 habitats. The plan is for the first module to be launched in 2019 and the second in 2020. The modules will provide the first commercial space habitat research facilities in orbit.

In the meantime, I’m sure many of us would settle for a visit to a “Bigelow Bungalow” in low-Earth orbit, where we could happily sip champagne from a tube and enjoy a not-so-very-private room with a view.

Recognise these planet names: Vulcan, Neptune, Pluto, Nemesis, Tyche and Planet X? They all have one thing in common: their existence was predicted to account for unexplained phenomena in our solar system.

While the predictions of Neptune and Pluto proved correct, Nemesis and Tyche probably don’t exist. Now we have another contender, Planet Nine – the existence of which astronomers predicted last month – but we may need to wait ten or more years for it to be confirmed.

Compare this to Vulcan. While many claimed to have observed the predicted planet, it took 75 years and Einstein’s general theory of relativity to consign it to the dustbin of history.

Somewhere out there

Astronomers are finding new exoplanets in other parts of the galaxy all the time. So why is it so hard to pin down exactly what is orbiting our own sun?

One reason is that very different methods are used to identify planets in other solar systems. Most involve observing periodic changes in the star’s light as the planet swings around it, as intercepted by telescopes such as Kepler.

Inside our own solar system, we can’t see these effects when we’re looking out into the darkness rather than towards the sun. Instead, planet-hunters use indirect means. Slight wobbles and perturbations in the orbits of planets, comets and other objects may reveal the gravitational presence of ghostly neighbours we didn’t know we had.

This method has been used often over the past two centuries to predict new planets.

The planet that arrived late

In 1843, French mathematician Urbain Le Verrier published his provisional theory on the planet Mercury’s orbital motion.

Three years in the writing, it would be tested during a transit of Mercury across the face of the sun in 1845. But predictions from Le Verrier’s theory failed to match the observations. Mercury was late by 16 seconds!

A photomosaic of images collected by Mariner 10 as it flew past Mercury. But was there another planet nearby? NASA

Le Verrier was not deterred. Further study showed that Mercury’s perihelion – the point when it’s closest to the sun – advances by a small amount each orbit, technically called perihelion precession.

But the amount predicted by classical mechanics differed from the observed value by a minuscule 43 arcseconds per century.

Initially, Le Verrier proposed that the excess precession could be explained by the presence of an asteroid belt inside the orbit of Mercury. Further calculations led him to prefer a small planet, which he named Vulcan after the Roman god of fire.

The search for Vulcan

It was a credible claim, as in 1846 Le Verrier had also successfully predicted the position of Neptune from perturbations of Uranus’s orbit. Now astronomers just had to find Vulcan.

As planet fever hit the popular press, professional and amateur astronomers reviewed solar photographs to see whether Vulcan transits had been mistaken for mere sunspots.

The first possible sighting came immediately. In 1859 Edmond Lescarbault, a country doctor and gentleman astronomer in France, claimed to have seen Vulcan transit across the sun.

Vulcan’s moment in the sun came to a head in 1869. Observations of solar transits in March and April and a solar eclipse in August failed to see the elusive planet.

Not everyone was ready to give up, though. At the Sydney Observatory, astronomer Henry Chamberlain Russell watched the sun for three days in March 1877, according to a report in Sydney’s Evening News, on Friday March 23, which said:

No sign of Vulcan appeared all through the 20th and 21st. But in watching for this planet several interesting observations were made of the sun’s spots.

The explanation for the missing seconds came from a completely different direction. After Einstein published his general theory of relativity in 1915, it was revealed that the discrepancy was caused by the sun’s distortion of spacetime.
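For the curious, the relativistic correction can be checked with a few lines of arithmetic. The sketch below is a back-of-the-envelope check, using standard textbook values for Mercury’s orbit (my own additions, not figures from the historical sources) and the first-order general-relativistic precession formula:

```python
import math

# Standard values (illustrative; not from the historical sources discussed above)
GM_SUN = 1.32712440018e20   # gravitational parameter of the sun, m^3/s^2
C = 2.99792458e8            # speed of light, m/s
A = 5.7909e10               # Mercury's semi-major axis, m
E = 0.2056                  # Mercury's orbital eccentricity
PERIOD_DAYS = 87.969        # Mercury's orbital period, days

# General relativity predicts an extra perihelion advance per orbit of
# delta_phi = 6*pi*G*M / (a * (1 - e^2) * c^2)   [radians]
delta_phi = 6 * math.pi * GM_SUN / (A * (1 - E**2) * C**2)

orbits_per_century = 36525 / PERIOD_DAYS
arcsec_per_rad = 180 / math.pi * 3600
excess = delta_phi * orbits_per_century * arcsec_per_rad

print(f"{excess:.1f} arcseconds per century")  # ~43, matching the observed anomaly
```

Running this reproduces the famous excess of about 43 arcseconds per century, with no Vulcan required.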

Speculation about unsighted planets never entirely died down in the astronomical community, but decades passed without any major breakthroughs.

In the 1950s, though, the solar system potentially expanded to a distance 100,000 times further than Earth’s orbit. The Dutch astronomer Jan Hendrik Oort hypothesised the existence of a spherical distribution of icy bodies. The Oort Cloud is thought to be the source of long-period comets, which have eccentric orbits and periods from 200 to many thousands of years.

In 1951 the Dutch-American astronomer Gerard Kuiper proposed that a similar belt of icy objects beyond Neptune’s orbit could account for short-period and short-lived comets. In 1992 astronomers David Jewitt and Jane Luu discovered the first of these Kuiper Belt Objects (KBO) – originally called “Smiley”, it is now catalogued more prosaically as 1992 QB1.

The best-known KBOs are Eris, Sedna and the dwarf planet Pluto. After flying by Pluto on July 15, 2015, the New Horizons spacecraft is due to encounter the KBO 2014 MU69 on January 1, 2019.

Speculation and measurement

Other predictions for new solar system objects came from looking at the terrestrial fossil record, rather than the skies.

On the basis of statistical analysis of mass extinctions, the American palaeontologists David Raup and Jack Sepkoski proposed in 1984 that they coincided with large-impact events. Independently, two teams of astronomers suggested that a dwarf star, later named Nemesis, passes through the solar system every 26 million years, flinging comets on a path to impact Earth.

Comets provide key evidence in these studies. Analysis of perturbations in comet orbits led astronomers to propose that a brown dwarf (bigger than a planet but smaller than a star) exists in the outer solar system. It is named Tyche, the good sister of Nemesis.

In 2003, the “Pluto killer” Michael Brown was part of a team that discovered what he called “the coldest most distant place known in the solar system”, which came to be known as Sedna. The discovery of this Kuiper belt object prompted further searches and much speculation as to its origin – particularly its strange orbit.

As more and more objects were identified in the Kuiper Belt, it was possible to observe orbital anomalies more precisely. The simplest way to explain them was another planet.

At its closest approach to Earth, the predicted Planet Nine will still be 200 astronomical units (au) away (about 30 billion kilometres). Compare this to Pluto’s orbit, which is an average of 39 au from the sun (5.8 billion kilometres). We don’t even know where Planet Nine is right now, if it exists at all.
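The unit conversions above are easy to verify. This snippet is a quick sanity check using the IAU value of the astronomical unit (the constant is my addition, not from the article):

```python
AU_KM = 149_597_870.7  # one astronomical unit in kilometres (IAU definition)

# Planet Nine's predicted closest approach vs Pluto's average distance
print(f"200 au = {200 * AU_KM / 1e9:.1f} billion km")  # ~29.9, i.e. about 30 billion
print(f" 39 au = {39 * AU_KM / 1e9:.1f} billion km")   # ~5.8 billion
```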

But everything we learn about the dark outer regions contributes to the story of how our solar system evolved, and, more importantly, how it will change in the future.

Finding Pluto: the hunt for Planet X

Our solar system’s shadowy ninth (dwarf) planet was the subject of furious speculation and a frantic search for almost a century before it was finally discovered by Clyde Tombaugh in 1930. And remarkably, Pluto’s reality was deduced using a heady array of reasoning, observation and no small amount of imagination.

The 18th and 19th centuries were thick with astronomical discoveries; not least were the planets Uranus and Neptune. The latter, in particular, was predicted by comparing observed perturbations in the orbit of Uranus to what was expected. This suggested the gravitational influence of another nearby planet.

John Couch Adams and Urbain-Jean-Joseph Le Verrier calculated the orbit of Neptune by comparing these perturbations in Uranus’ orbit to those of the other seven known planets. Neptune was hence discovered in the predicted location in 1846.

Soon after this, French physicist Jacques Babinet proposed the existence of an even more distant planet, which he named Hyperion. Le Verrier wasn’t convinced, stating that there was “absolutely nothing by which one could determine the position of another planet, barring hypotheses in which imagination played too large a part”.

Despite the lack of evidence for perturbations in Neptune’s orbit, many predicted the existence of a ninth planet over the next 80 years. Frenchman Gabriel Dallet called it “Planet X” in 1892 and 1901, and the famed American astronomer William Henry Pickering proposed “Planet O” in 1908.

Comets, the law of vegetable growth and a conspiracy

In addition to the perturbations of known planets there were other hypotheses that foretold unknown bodies beyond Neptune.

In the 19th century, it was understood that many comets had highly elliptical orbits that swung past the outer planets at their farthest points from the sun. It was believed that these planets diverted the comets into their eccentric orbits.

Pluto is not only distant, but it’s small. That makes it very difficult to see from Earth. NASA

In 1879 the French astronomer Camille Flammarion predicted a planet with an orbit 24 times that of Earth’s based on comet measurements. Using the same method, George Forbes, professor of astronomy at Glasgow University, confidently announced in 1880 that “two planets exist beyond the orbit of Neptune, one about 100 times, the other about 300 times the distance of the earth from the sun”.

Depending on how the calculations were done, the results predicted anything from one to four planets.

Other predictions were based on what can be described as numerical curiosities or speculations. One of these was the now-discredited Bode’s law, a sort of Fibonacci sequence for planets. The American mathematician Benjamin Peirce was not a fan, claiming that “fractions which express the law of vegetable growth” were more accurate than Bode’s law.

As well as these earnest astronomers, the trans-Neptunian planet idea attracted cranks and visionaries. An interesting contribution came in 1875 from Count Oskar Reichenbach, who accused Le Verrier and Adams of conspiring to conceal the locations of two trans-Neptunian planets.

The early photographic searches

Theories and calculations were all well and good, but many hoped to actually see the hitherto invisible planet(s). From the late 1800s new powerful telescopes equipped with the latest dry-plate photographic technologies were employed to search for undiscovered planets.

Amateur astronomers such as Isaac Roberts and William Edwards Wilson used the predictions of George Forbes to search the skies, taking many hundreds of photographic plates in the process. They found no lurking trans-Neptunian planets.

The professionals fared no better. Edward Charles Pickering, director of the Harvard Observatory and William’s brother, spent around ten years from 1900 searching using his own data and those of earlier astronomers such as Dallet, all to no avail.

Lowell’s approach

In 1906 a new approach was introduced by the veteran astronomer Percival Lowell. Although best known to us for his (mistaken) observations of canals on Mars, Lowell brought a new rigour to analysing the orbit of Uranus based on observational data from 1750 to 1903.

With these improved calculations, hope for a visual fix on the elusive planet was renewed. With the aid of the brothers Vesto and Earl Slipher, Lowell spent the rest of his life scanning photographic plates with a hand magnifier and finally with a Zeiss blink comparator.

In September 1919 William Pickering kicked off another search for “Planet O” based on deviations in Neptune’s orbit. Milton L Humason, from the Mount Wilson Observatory in California, started a search based on these new predictions as well as Lowell’s and Pickering’s 1909 predictions. This search again failed to find any new planets. Pickering continued to publish articles on hypothetical planets but by 1928 he had become discouraged.

Zeiss blink comparator at the Lowell Observatory, used in the discovery of Pluto by Clyde Tombaugh in 1930. nivium/Wikimedia, CC BY

For Clyde Tombaugh, the young astronomer hired by the Lowell Observatory in 1929 to continue the search, this was grim, unglamorous work. Each plate was exposed for an hour or more, with Tombaugh adjusting the telescope precisely to keep pace with the slowly turning sky. Today a computer would make the comparisons, but in 1929 they were made by eye, manually flicking between two images. Stars would remain motionless while other bodies would seem to jump between views. Some images would have 40,000 stars, others up to 1 million.

Nearly a year had elapsed when, on February 18, 1930, two images fifteen times fainter than Neptune were found among 160,000 stars on the photographic plates. The discovery was confirmed by examining earlier images. On February 20 the planet was observed to be yellowish, rather than bluish like Neptune. The new planet had revealed its true colours at last.

Announcing a discovery

Slipher waited until March 13 to announce the discovery. This was both Lowell’s birthday and the anniversary date of the discovery of Uranus. The announcement set off a worldwide rush to observe and photograph the new planet.

Now that astronomers, amateur and professional alike, knew what they were looking for, it turned out that Pluto had been hiding in plain view. Re-examination of Humason’s plates showed four images of Pluto from his 1919 survey, and there were many others.

On March 14, an Oxford librarian read the news to his 11-year-old granddaughter Venetia Burney, who suggested the name Pluto. The same name was also suggested independently in a letter by William Henry Pickering.

To complete the circle, some of Clyde Tombaugh’s remains are in a canister attached to the New Horizons spacecraft.

Most people alive today would not remember a universe without Pluto. And from 2015, its patterned surface will enter our visual vocabulary of the planets. Once seen, it can never again be unseen. Planet X, welcome to our world.

The Jovian moons – named after Jupiter’s lovers by Simon Marius – have been a source of scientific speculation since Galileo trained his telescope on Jupiter in 1610, announcing his discovery in the Sidereal Messenger.

But the idea that Europa and other moons of Jupiter might harbour life is relatively new, as is the notion they might have hidden oceans beneath their icy surfaces. Indeed, these speculations demonstrate just how fast our conceptions of the solar system, and life, can change.

Speculative science, speculative fiction

A generation of space scientists and enthusiasts who grew up on Robert A. Heinlein’s “juveniles” will fondly remember Farmer in the Sky, written in 1950, when the Jovian moons were believed to be rocky, like our own Moon.

But in the late 1950s, and continuing through the early 1970s, a growing body of telescopic data suggested that some of these moons, in particular Callisto, Ganymede and Europa, were covered in water ice. This speculation came from their high albedo, a measure of how much light they reflect. With an albedo of 0.64, Europa is one of the most reflective bodies in the solar system.

Europa as seen by Voyager 2 during its close encounter in 1979. NASA/JPL

The early 1970s also saw the first speculation that some outer moons of the solar system, including Europa, might hide an ocean beneath their surfaces. It was initially suggested this might be due to radiative heating, although it was later proposed that the heat might come from tidal forces induced by Jupiter, especially because of the synchronous orbits of the three innermost Galilean moons: Io, Europa and Ganymede.

The 1979 Voyager fly-bys confirmed that Callisto, Europa and Ganymede were covered in ice and that Io was extremely volcanic. The best images of Europa were taken by Voyager 2 from a range of 204,400 kilometres, showing Europa to be “billiard ball” smooth.

Not too hot, not too cold…

Things took a turn following the discovery by Robert Ballard’s 1977 expedition of entire ecosystems thriving near hydrothermal vents in the deep ocean. These vents existed in the “midnight zone”, without sunlight and photosynthesis, and changed the way we thought about life.

The discovery of life around deep ocean vents, like this one, raised the exciting prospect of life existing under the ocean on Europa. P. Rona/NOAA

In 1980, scientists Gerald Feinberg and Robert Shapiro hypothesised that deep sea volcanism might support life on the Jovian moons. The Feinberg-Shapiro hypothesis is one of the major reasons for the current interest in Europa by astrobiologists.

In essence, it was proposed there might be a tidally heated habitable zone around giant planets, similar to the habitable, or “Goldilocks”, zone around a star: where it’s not too hot, not too cold, and where liquid water and life can exist.

The idea of life on the Jovian moons was quickly picked up by science fiction writers. In Arthur C. Clarke’s 2010: Odyssey Two (1982) and 2061: Odyssey Three (1988), aliens transform Jupiter into a star, kick-starting the evolution of life on Europa and transforming it into a tropical ocean world forbidden to humans.

In Bruce Sterling’s 1985 Nebula Award nominee, Schismatrix, Europa’s ocean is colonised by a group of genetically transformed post-human species.

Fire and ice

Europa and life were thus well and truly established in the minds of science fiction writers, planetary scientists, exobiologists and the public by the time NASA’s extraordinary Galileo mission began taking images of Europa in 1996.

This is the colour view of Europa from Galileo, taken in the 1990s, that shows the largest portion of the moon’s surface at the highest resolution. NASA/JPL

By the completion of its primary mission on December 7 1997, Galileo had made eleven encounters with Europa. Galileo’s extended mission became one of “fire and ice”: its twin foci were Io’s vulcanism and Europa’s icy oceans. The Europa fly-bys took the probe to within a few hundred kilometres of the moon’s surface.

These extensive observations of Europa by the Galileo mission were compelling evidence for a liquid water ocean some 100 to 200 kilometres thick on which “floats” an outer shell of ice. Magnetometer measurements indicate the ocean is free flowing and salty.

Galileo also provided spectacular views of the icy terrain: ridges, slip faults and “ice-bergs”, all adding to the picture of a surface only 10-100 million years old, which is young compared with the roughly 4.6-billion-year age of the solar system.

The spacecraft, nearly out of fuel after an extended mission, was deliberately crashed into Jupiter on 21 September 2003 to protect Europa from possible contamination.

Europa Report

The data Galileo collected are still revealing important new finds. There is evidence of clay-like minerals on the surface, possibly from an asteroid or meteorite collision, and signs of sea salt, discoloured by radiation, making up some of the dark patches observed by both Voyager and Galileo.

The new mission, slated for a rendezvous with Europa in 2030, won’t involve a lander. And until we can send a probe into the icy depths of Europa’s sea, speculation about what might be lurking there, à la Sebastián Cordero’s Europa Report, will remain the domain of science fiction and scientists’ fantasy. Maybe one day, it will be science fact. Europa, here we come.

Although sometimes called “Newton’s chair” after its most famous holder, Sir Isaac was not the only brilliant mind, nor the most colourful individual, to occupy the post.

The Lucasian Chair was founded in 1663 through the bequest of Henry Lucas, a member of Parliament for Cambridge University from 1640 to 1648. In his will, he provided “a yearly stipend and salarie for a professor […] of mathematicall sciences in the said Vniversitie” to “honor that greate body” and assist “that parte of learning which hitherto hath not bin provided for”.

The Lucasian Chair has been held by a fascinating procession of scientists, including the physicist and mathematician Isaac Newton (who held the chair from 1669 to 1702).

It also has the unusual distinction of having been held by a famous – though fictitious and wholly artificial person – Star Trek: The Next Generation’s Data, in the series’ final episode, “All Good Things…”. But that is another quantum timeline.

Smart seat

The first Lucasian Professor, Isaac Barrow, held both the Regius Professorship of Greek and Gresham Chair in geometry.

Sadly, Barrow’s early ardour for mathematics had waned by the time he took up the Chair in 1663. His “method of tangents”, though, was seen as groundbreaking at the time. This proto-calculus set the scene for his brilliant successor: Isaac Newton.

Newton was elected to the Chair after his annus mirabilis of 1666. According to William Stukeley’s 1752 biography, that is the year Newton inferred the law of gravity by observing an apple falling in his orchard as he “sat in contemplative mood”.

At the time of Newton’s election in 1669, the Lucasian Chair was one of eight Chairs at Cambridge. The Lucasian Professor is elected, then as now. The election is made by the masters of the Colleges at Cambridge, with the vice chancellor able to break a deadlock if required.

An uneven history

Despite its prestige, the history of the Chair is not one of undiluted greatness.

The stories of the post-Newtonian Chairs of William Whiston (from 1702 to 1710), Nicholas Saunderson (1711 to 1739), John Colson (1739 to 1760), Edward Waring (1760 to 1798) and Isaac Milner (1798 to 1820) were largely ones of translating, teaching, expanding and developing the great works of the former Chair-holder, Newton.

In the latter half of the 19th century, as science became the arena of professional scientists rather than dilettante gentlemen, the Lucasian Chair was sometimes used as a stepping stone to more lucrative or important positions.

Robert Woodhouse (Chair from 1820 to 1822) lasted only two years in the post. He was rewarded for his “conformity” by securing the Plumian Chair of mathematics and the directorship of the Cambridge astronomical observatory.

His successor, Thomas Turton (from 1822 to 1826), described as “mathematically inert and utterly reliable”, departed to the more prestigious Regius Chair of Divinity (founded in 1540 by Henry VIII) and better paid dean-ships, eventually becoming the Bishop of Ely.

Dirac and the quantum age

Nevertheless, while brilliance might not characterise every holder of the Chair, Paul Dirac (from 1932 to 1969) was indisputably brilliant. In fact, Dirac personified the stereotype of the lone genius.

Paul Dirac was one of the more brilliant Lucasian Professors. He predicted the existence of antimatter before it was first detected. Nobel Foundation

Einstein said of him: “This balancing on the dizzying path between genius and madness is awful.”

By the age of 26, Dirac had, in the period from 1925 to 1928, developed his own theory of quantum mechanics and relativistic quantum theory of the electron, as well as predicted the existence of antimatter.

Hawking: the stopgap professor?

Of the more recent holders of the Lucasian Chair, it is the name of Stephen Hawking, who held the Professorship for three decades from 1979 to 2009, that has become most synonymous with the post – and a household name at that.

Diagnosed with motor neurone disease at the age of 21 and given only a few years to live, Hawking nonetheless confounded his doctors and held the chair until the retirement age of 67.

Hawking had, at the time of his election, hoped the Chair might go to a brilliant scientist who was not already affiliated with or educated at Cambridge. This would have been a remarkable change.

Holders of the Lucasian Chair have all been Cambridge graduates, in addition to being male and British. Only Dirac and Hawking took undergraduate degrees at a university other than Cambridge (Bristol and Oxford, respectively). Dirac alone was not British from birth: though born in England in 1902, he was a Swiss national until he acquired British nationality in 1919.

The quality of Hawking’s scientific output puts this “stopgap professor” in the Lucasian top-three league, along with Newton and Dirac.

Incidentally, Stephen Hawking played a game of poker with Star Trek’s Data – the fictitious future Lucasian Chair – along with fellow Chair Isaac Newton and Albert Einstein (the latter played by actors, of course) in Star Trek: the Next Generation’s episode “Descent”.

Hawking was succeeded by Michael Green, who was Lucasian Professor from 2009 to this year. Green has made lasting contributions to mathematics and theoretical physics, including his pioneering work on string theory in 1984.

His successor, Michael Cates, is a theoretical physicist specialising in soft matter. His models capture the essential physics without including all the, at times confounding, chemical detail.

Prior to his election as Lucasian Professor, Cates held a Royal Society Research Professorship at Edinburgh. At age 54, he will likely hold the Chair for more than a decade. It will be fascinating to see what he contributes to mathematics and the ongoing Lucasian history during his tenure.

As for future chairs? If Star Trek is any indication, it will continue to be populated by some of the most brilliant minds in the known universe – although one wonders when it might be finally held by a brilliant woman.

Introduction

Here I will be arguing that comets were not instrumental in the emergence of modern astronomy in the 17th century, contrary to the view, most notably propounded by Kuhn and Hellman, that observations of comets were of paramount importance in ushering in a post-Newtonian modern astronomy by the end of the 17th century. Heidarzadeh posits that the history of comets falls into four periods, the first two being most relevant to this discussion: from Aristotle to Brahe, comets were assumed to be meteorological phenomena; from Brahe to Newton, comets were admitted as celestial bodies but with unknown trajectories; from Newton to Laplace, they were treated as members of the solar system; and in the post-Laplacian period, the mass and density of comets were calculated to be much less than those of the planets.

The focus here is particularly on observations of the 1577 comet by Tycho Brahe and Michael Mästlin and of the 1607 comet by Johannes Kepler and Edmond Halley. From their observations and subsequent explanations it is obvious that there was more than science involved in their arguments: their actions cannot be removed from the political, social and cultural context of their time. In agreement with Nouhuys and Schechner Genuth, I find that in the contemporary 16th- and 17th-century view there was no distinction between magical and scientific ideas, and assessing the science without appreciating this leads to a misleading understanding of the role played by comets in this period. In Kepler’s extensive writing we also find concurrence: he found comets to be unimportant in his arguments for a new cosmology. In addition, in the shared discourse of comet lore well into the 18th century, comets were still seen as agents of upheaval, renewal and divine justice.

A tour of the universe in 1577

By modern astronomy, I mean a world-view in which the solar system is heliocentric, with planets and comets, for example, having elliptical orbits described by Kepler’s laws and Newtonian mechanics. In the late 16th and early 17th centuries, though, a variety of opinions were advanced regarding the universe: the shaping of ‘scientific’ ideas was marked by the adaptability of the Renaissance Aristotelians, the role played by humanism, and astrological and divinatory beliefs.

Aristotle (384-322 BC) presented a long-lasting model of the universe. The key idea, for our purposes, was that it comprised a terrestrial and a celestial tier. The lower, terrestrial tier, from the moon down to the centre of the earth, was composed of four elements: earth, water, air and fire. The celestial tier, containing the planets (which included the moon and sun) and the stars, was made of a fifth, non-material element: aither. The upper tier was unchanging. The stars were fixed to an outer, perfect sphere revolving around the centre of the universe, the earth. Likewise the planets were fixed to interlocking spheres that transported motion from planet to planet. A key idea here, and somewhat challenging to our modern sense of material and immaterial, is that these non-material spheres were hard and impenetrable, 'adamantine': solid enough to keep the planets fixed in their motions and to ensure that the spheres did not overlap. The terrestrial tier was the home of all change and corruption. Here was where Aristotle placed comets – they were sub-lunar phenomena composed of hot, dry exhalations. They are discussed in his Meteorology, not in On the Heavens.

This highly philosophical model of Aristotle's was complemented by the mathematical model of Claudius Ptolemy (c.100-c.170 AD). Ptolemy successfully incorporated 800 years of observational data into a geometric model based on Aristotle's. By incorporating epicycles and equants, Ptolemy could successfully predict planetary motions. He accepted Aristotle's aither, but not the interlocking spheres that transmitted motion from the stars to each successive planet down towards the terrestrial realm. Along with its predictive accuracy, another of the model's successes was its simplicity from the point of view of the earth-stationed observer.

The challenge to the geocentric model of Aristotle and Ptolemy by Nicolaus Copernicus (1473-1543) in his 1543 work De revolutionibus orbium coelestium is well documented. Copernicus' fundamental objection to geocentrism was directed against both Aristotle and Ptolemy: he found it strange that motion had to be imparted to the 'fixed' stars in order to make the planets move. Copernicus argued it was better to have the Earth in motion, reducing the complexity of the epicycles and removing the equants needed to describe the geocentric system. This argument was not new – it reworked a Stoic criticism. Comets, however, did not appear in Copernicus' universe. They did make an appearance in 1577.

Brahe, Mästlin and the comet of 1577

On the afternoon of November 13, 1577, Tycho Brahe (1546-1601), Danish royal consultant on astronomical and astrological matters, noticed a bright star. When a long ruddy tail, stretching in the opposite direction from the sunset, grew visible, Brahe realised it was a comet – at age 31, it was the first he had seen. For the next two and a half months he observed and recorded its position against the fixed stars. From these parallax measurements, Brahe concluded that the comet was about one third of the way from the earth to the stars. The parallax measurements challenged the logic of immutable heavens – both Aristotle's celestial sphere and the existence of crystalline spheres supporting the planets – rather than the geocentric model itself. Brahe had already been sceptical of Aristotle's distinction between the celestial and terrestrial regions, a subject on which he had delivered a lecture series in 1574-5. In addition he had witnessed a nova, the 'new star' of 1572, which revealed the heavens to be mutable.
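As an illustrative sketch in modern terms (not Brahe's own working), the parallax reasoning runs as follows: an observer carried round by the rotating earth is displaced by up to an earth radius $R_\oplus$ over a night, so a nearby object shifts against the fixed stars by a diurnal parallax $p$ related to its distance $d$ by

```latex
\sin p \approx \frac{R_\oplus}{d}
\quad\Longrightarrow\quad
d \approx \frac{R_\oplus}{\sin p}.
```

The moon's diurnal parallax of roughly one degree corresponds to a distance of about $60\,R_\oplus$; since Brahe could detect no parallax for the comet anywhere near that size, the comet had to lie well beyond the moon.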

In 1578 Brahe wrote a brief manuscript on the 1577 comet in German. The comet had also stimulated him to develop a new model of the planetary system, with a stationary Earth remaining at the centre of the universe, around which the sun and moon revolved, and with the planets revolving around the sun. This model was developed by 1583 and published by Brahe in 1588 in De mundi aetherei recentioribus phaenomenis liber secundus. While the rejection of the crystalline spheres and the imperfection of the celestial region led Brahe to examine the Copernican model, they were not sufficient to overthrow the ancient models.

Politics and astrology were inextricably linked with the 1577 astronomical observations for Brahe. Within five weeks of the comet's sighting, a pamphlet had been produced by Jørgen Dybvad, a professor at the University of Copenhagen. Dybvad had first sighted the comet on November 11, 1577, while in the company of King Frederick of Denmark. His pamphlet, written for a public audience in vernacular German and dedicated to the King, can be seen as more a political document than a scientific one. In it he delivered an apocalyptic message based on the "terrible great comet", marshalling "2000 years of astrological and historical evidence" to bolster his assertions. Brahe's pamphlet in response was as much an assertion of his own political credibility and influence with the King as a development of his scientific endeavours. In his pamphlet on the comet Brahe states:

It is regrettable that this comet, no less than former ones, brings and arouses the same evil effects and misfortunes here on earth, so much the more so because this comet has grown so very much greater than others and has a saturnine, evil appearance, which was revealed by its pallid appearance and unclearly shining color like the star Saturn.

Brahe then went on to presage, for example, calamities for Europe west of Denmark, as the comet had "first let itself be seen with the setting of the sun." The arena was one of astrology and royal influence rather than modern scientific debate.

Michael Mästlin (1550-1631), astronomer at the University of Tübingen and teacher of Johannes Kepler (1571-1630), also observed the 1577 comet. Voelkel claims "Mästlin was the only convinced Copernican teaching at a university in Europe when Kepler was a student." Mästlin likewise concluded that the comet was located beyond the moon. His most important contribution was subtle: he attempted to compute the comet's orbit. With only limited orbital information available he was genuinely innovative in trying to find an orbit for this transitory phenomenon, assigning the comet a circular heliocentric orbit between those of Earth and Venus. In his published demonstratio, Mästlin used the Copernican model as a calculation tool without directly endorsing it. Mästlin's orbital results were overshadowed by Brahe's study; however, he greatly influenced his student, Kepler.

In addition to these astronomical innovations, the observations also changed the nature of augury. New models and instruments did not, however, stop comets from being read as portents, despite the growing interest in physical models of comets. Mästlin, along with others including Brahe, proposed that comet tails were an optical effect caused by sunbeams shining through translucent spheres. Beyond Brahe's politically motivated divinations, others such as John Bainbridge, future Savilian Professor of Astronomy at Oxford, were still seeing evidence of divine providence in the comet of 1618; this comet lore only began to decline in learned circles towards the end of the 17th century.

Kepler, Newton and Halley and the comet of 1607

Kepler acknowledged that his faith in the Copernican model first arose from Mästlin's studies of the 1577 comet. Kepler's 1593 student disputation argued for the new ordering of the inferior planets and the dispensability of Aristotle's adamantine spheres. This can be seen as a preference rather than a necessity, as Kepler was also the first to demonstrate the geometrical equivalence of the Ptolemaic and Copernican models. Most importantly for my argument, by 1604 Kepler had downplayed the influence of the observations of the 1577 comet and had decided that all comets had rectilinear paths. This is despite publishing the idea of elliptical orbits for the planets in his 1609 Astronomia Nova. By the 1621 second edition of his Cosmographic Mystery, Kepler had decided that the comet had entirely outlived its original usefulness as an argument in support of his new cosmology.

In 1619 Kepler published De Cometis Libelli Tres, which included a diagram of the 1607 comet, clearly identifying the comet's path as rectilinear. In 1695 Edmond Halley used this, along with observational data from John Flamsteed (1646-1719), the first Astronomer Royal, to compute cometary paths, and concluded that the comet of 1682 was the same as the comets of 1607 and 1531. Working with Isaac Newton, Halley found that these comets travelled in closed elliptical orbits, following Kepler's descriptions and the universal laws of Newton's Principia. Halley further predicted that the comet would reappear in 1758.
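As a rough modern check (my illustration, not Halley's actual calculation), Kepler's third law links the comet's period $T$ to the semi-major axis $a$ of its orbit:

```latex
\left(\frac{T}{1\ \text{yr}}\right)^{2} = \left(\frac{a}{1\ \text{AU}}\right)^{3}
\quad\Longrightarrow\quad
a \approx 76^{2/3} \approx 18\ \text{AU},
```

so an apparition interval of about 76 years implies an ellipse stretching far beyond the orbit of Saturn – exactly the kind of closed, periodic path that made Halley's 1758 prediction possible.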

While this appears straightforward from the perspective of 'progress' in physics, it was neither necessary nor sufficient for the acceptance of the Copernican model. Ariew has argued that the Aristotelian theory of comets, suitably modified, survived in textbooks and university theses well into the second half of the 17th century. By 1657, for example, Aristotelians accepted comets as celestial objects – as stars, not fires, and therefore not sublunary – which posed no challenge to the idea of a firmament; some adopted a Tychonic or semi-Tychonic system on account of comets. Kepler, as argued, saw the comet as extraneous to his model, and both Newton and Halley "continued to see comets as harbingers of cataclysmic events and world reform." Newton noted that the paths of comets made them perfect for distributing "vital material" throughout the heavens, as well as for colliding with the sun or altering the solar system; Halley thought it possible that a comet had caused the biblical flood.

Conclusion

I have focused particularly on observations of the 1577 and 1607 comets because their influence is most often cited as instrumental in the emergence of modern astronomy in the 17th century. What I have argued is that there was as much political and social reasoning behind their observation and explanation as there was science. Kepler, who put Copernicus' model together with Brahe's observations to create his "new astronomy", saw no place for comets in that model, consigning them to linear paths rather than the elliptical orbits of the planets. Finally, I have shown that, into the 18th century, the shared discourse of comet lore still cast comets, supra-lunary objects though they now were, as agents of upheaval, renewal and divine justice.

Introduction

In 1859 Charles Darwin published his now famous On the Origin of Species, which provided a well-researched and reasoned naturalistic explanation of species evolution through 'natural selection'. In introducing natural selection, did Darwin enable us to dispense entirely with teleological explanations for purpose and design in biology? In this essay I will contend that he did not. He removed from modern, informed discourse the presence of a 'designer', while leaving teleology as a useful explanatory concept in the biological sciences. This result produces a conundrum, especially if teleology is granted ontological existence rather than merely metaphorical status, suggesting that biology is not reducible to chemistry and physics. While fascinating in itself, that enticing discussion is not in the scope of this essay.

I will approach this essay by following Lennox's argument for the necessity of selection-based teleology and its origin in Darwin's writings. I will present counter-arguments from Dawkins and Nagel, and then Davies's strong argument that teleology as a metaphor is not a necessity but rather a conservative psychological clinging to metaphorical explanation. I will then provide Bedau's argument that teleological metaphor does have a heuristic role in biological discussions, and as an example look at how it can give rise to 'value' in biological organisms. Via Aristotle's final causality, I will argue that the extension of this teleological argument – that humans are purposeful animals – holds also for human culture. I will develop the argument that teleology can be a substantive argument for both purpose and value in human cultural evolution, establishing the conclusion that culture neither transcends nor contrasts with nature in a modern sense of evolution by natural selection.

The roots of teleology and Darwin

In general, and for the purposes of this essay, a teleological explanation is one in which some property is said to exist, or some process is said to take place, for the sake of a certain result or consequence. Teleological thinking originates from three views, all having their roots in ancient Greece. First is what Lennox calls the 'unnatural teleology' of Plato in the Timaeus and Laws, in which the explanation of natural phenomena is an artefact of a divine, supernatural, intelligent being. Second are the 'natural teleological explanations' of the Aristotelian view (the motions of natural objects are explained by their intrinsic purpose, unless they are subject to external interference), which were discredited by Galileo and Newton in all the natural sciences bar biology. Finally there is the anti-teleological view of the Greek atomists.

The story gets more complicated, particularly in the early modern period, after the unnatural 'intelligent design' model of Plato was melded into Christianity and the medieval commentators had added to Aristotle. Later, René Descartes, Francis Bacon and Baruch Spinoza all argued against the legitimacy of teleology, but on different and contradictory grounds. In the seventeenth century Robert Boyle and John Ray developed a Christian version of teleology based on the Platonic 'unnatural teleology', which came to be known as natural theology. Charles Darwin, along with many naturalists of his era, studied this in his time at Cambridge University, in the form of the Natural Theology writings of William Paley.

Lennox argues that Darwin's explanations in his writings are teleological. In Darwin's 1868 monograph, The Variation of Animals and Plants under Domestication, Lennox quotes Darwin providing teleological explanations, without theological backing, for variation in plants being "accidental but for the promotion of the organisms' wellbeing." Lennox argues convincingly that there is a value component to Darwin's explanations: "those traits which provide a relative advantage [...] to the organisms that have them are selectively favoured."

Teleological explanation in biology today

In agreement with Lennox, other authors have proposed that teleology, in one form or another, is indispensable to biology. To understand the complex morphological and behavioural traits of organisms, it seems we must say what the traits are for, which is to give a teleological explanation of why organisms have them. Some have argued that this is not necessary. Dawkins, for example, argues that natural selection provides a statistical explanation for why organisms have the traits they do, obviating any need for teleological explanations. Dawkins' argument is in line with Nagel's contention that being controlled by a program – having a function – is not the same as being goal-directed.

Dawkins, however, was contending more against the teleological argument from natural theology, for he provides a stark teleological explanation for his main theme, the selfish gene: "I shall argue that the fundamental unit of selection, and therefore of self-interest, is not the species, not the group, nor even, strictly, the individual. It is the gene, the unit of heredity." Here Dawkins sees genes as following a program, rather than as being literally 'selfish', as his hypothesis boldly states.

Dawkins does apply 'selfish' continually as a metaphor, furthermore giving genes what could be interpreted as an Aristotelian final cause – the purpose of genes is to be "selfish replicators". Other philosophers maintain that this sense of purpose should be dismissed as a consequence of our psychology.

Davies argues strongly that the architecture of our minds is such that we see 'purpose' and conceptualise certain objects as minded agents. This ability may have had (and may still have) a selective advantage for our ancestors; however, it presents us with an abundance of false positives. "We readily see – we cannot help but see – minded agents or telltale effects of minded agents at nearly every turn, even when none is present." Davies notes that Darwin did kill design by a deity: "the theory of evolution by natural selection explains the diversity and adaptiveness of living forms better than any form of theology." Davies, however, rails against the "conservatism" and "stubborn insistence" that give rise to the seemingly indispensable persistence of metaphor in the concept of design in modern biology. He instead proposes that we can formulate a concept of biological functions without the teleological metaphor of design.

However, it would seem that there is heuristic value in the use of metaphor – it lets us see things in ways we otherwise might not.

Bedau argues that "sanitizing" teleology by assimilating it into some "uncontroversial" descriptive form of explanation, as Davies demands above, misses the essential role that teleology does play in biology. Bedau proposes that value plays some role in the analysis of teleology, and that this can usefully be distinguished into three grades of evaluative involvement. Most arguments have focused on what he terms grade one, the good-consequences approach, which has many limitations and counter-arguments. He argues that value plays a role in grade two and, in particular, grade three explanations; in grade three the role is an essential part of the explanation. Grade three explanations, Bedau argues, are defined as a pair of logically linked propositions: one linking a means to an end, and another stating the goodness of that end. This has great explanatory power for mental agents, such as human behaviours; for artefacts (a rock is sometimes used as a paperweight, and a carburettor is designed to mix air and petrol); and for selection processes, such as evolution by natural selection.

Final causality and teleology revisited

Having seen the explanatory usefulness of teleology in modern biology, it is worth revisiting Aristotle, albeit briefly, to see whether his ideas can add anything to this post-Darwin debate on teleology. This is particularly relevant to humans, as mental agents, and to our self-perception that we are goal-directed individuals.

Aristotle’s telos is not a purpose or plan, nor is it a cosmic telos. Aristotle was primarily interested in individual living things, and his ‘final causality’ is the mode of causation characterising human actions. At the same time, Aristotle also found teleological causation at work in nature, in living organisms. I will focus on human actions now – not because, as Francis Bacon argued, final causes are of value only in the study of human affairs and are “barren virgins” in the study of nature, but rather in acknowledgement that Aristotle is a vast topic beyond the scope of this essay. I do suggest, as does Gotthelf for example, that Aristotle is worth revisiting to illuminate the implications of biological teleology for human life.

A key concept that arises from a study of Aristotle is that living organisms have an inherent telos and a good of their own. Millett interprets this as introducing value into the world. Furthermore, this imposes a moral obligation of responsibility on moral agents. Accepting and exercising such responsibility, Millett maintains, is a virtue. This argument, to my mind, provides a natural platform linking the teleology of all biological entities to a natural ethics. I don’t claim that this is either new or uncontroversial. Rather, I am suggesting that goal-directedness in humans is, and should be seen as, natural.

Is culture purposeful? Is it natural?

Humans are organisms that have evolved by natural selection, as part of an evolutionary tree that extends back to the beginnings of life on earth. It must be appropriate, then, to view humans as we view all other living organisms. I can plausibly apply teleological explanations, even if only as metaphor, to their development, and in addition plausibly argue for goal-directed behaviour by virtue of their being moral agents, as discussed above. Again I leave for other times the discussion of the existence or not of free will, and the possible impact of desire, emotion and behaviour on that discussion, and work on the pragmatic premise that all humans have free will to some extent. Humans do possess culture, which other organisms do not. Here I am defining culture as “cultivation of the soul”, or the betterment and refinement of individuals, possibly through formal or informal education. Further, in agreement with Premack and Hauser, I contend that culture is more than trivial behaviours that become population characteristics by social learning over generations. Premack and Hauser argue that human culture has a purpose: to “clarify what people value, what they take seriously in their daily lives, what they will fight for and use to exclude or include others in their group.” The question is then whether cultural inheritance, this teleological behaviour, is natural, or whether culture transcends the natural selection I have been discussing above.

It can be seen how early humans may have developed cultural behaviours – morals in the most primitive sense – that would have provided a group-selection advantage in their foraging existence. Boehm proposes these cultural behaviours could include group suppression of alpha-male dominance, facilitating the sharing of foraged food, and a culturally based method for resolving social problems. Boehm then proceeds to detail a plausible, if not necessarily falsifiable, hypothesis that relies on cultural development as a key component in group selection. The steps involve natural selection and cultural development together: first, biological selection providing the precursor to moral behaviour; then, secondly, the appearance of egalitarian bands (in the sense that weapon and tool use spread individual utility away from brute-force dominance) as a product of intentional cultural invention. These steps would have profoundly affected natural selection through breeding preferences. Finally, evolved altruistic tendencies within the group would have provided positive reasons for inclusive behaviour, in addition to the punitive measures of exclusion that Boehm hypothesises to have developed earlier.

I would argue this hypothesis remains valid through the transition from foraging to early agricultural civilisations and on to our modern times. Environmental factors and cultural selection would have driven natural selection, culture becoming an additional part of the environment, where ill-fitting or non-adaptive cultural practices would have led to cultural extinction, as has been hypothesised for early Middle Eastern and Mesoamerican cultures.

Evolution by natural selection proceeds from gene inheritance from our parents. Does cultural inheritance transcend nature by allowing learning from non-parents? It has been claimed that the overall adaptive benefits of such learning outweigh its overall adaptive cost. Individuals in a population can copy a behaviour, which augments fitness. It has been suggested that prestige bias may be a suitably evolved heuristic that plausibly explains how, on average, adaptive rather than maladaptive behaviours will be copied from individuals who excel in at least one domain.

This relies on the supposition that such individuals will serve as cultural models and, in formal civilisations, will attain prestigious positions. From these arguments it can be seen that culture neither transcends nor clashes with a modern sense of nature as evolving by natural selection.

This essay was first submitted in March 2015, by the author, as an assessment task for HPSC20002 “A History of Nature” as partial requirements for the award of a Post Graduate Diploma Arts (History and Philosophy of Science) at the University of Melbourne.

Bibliography

Primary Sources

Darwin, Charles, “On the origin of species by means of natural selection, or The preservation of favoured races in the struggle for life,” in From so simple a beginning: the four great books of Charles Darwin, edited by Edward O. Wilson, 441-760. New York: W. W. Norton & Company, 2006.

Paley, William, Natural Theology: or evidence of the existence and attributes of the deity, collected from the appearances of nature. (edited with an introduction and notes by Matthew D. Eddy and David Knight) Oxford: Oxford University Press, 2006.

Secondary Sources

Bedau, Mark, “Where’s the good in teleology?” in Nature’s purposes: analyses of function and design in biology, edited by Colin Allen, Marc Bekoff, and George Lauder, 261-291. Cambridge: The MIT Press, 1998.

In 1864 James Clerk Maxwell published his essay, A Dynamical Theory of the Electromagnetic Field[1], which contained what are now known as Maxwell’s equations: the four basic equations of the electromagnetic field[2]. In doing so he brought to a satisfactory pause an intense period of experiment and theorizing on the nature of electricity and magnetism. This period, I suggest, started in 1800 with Alessandro Volta’s invention of the voltaic pile, which enabled, for the first time, the production of a continuous electric current. The following six decades were a fascinating montage of experiments and theories. This essay is not going to address the nature or ontology of the various fluid, wave, and field theories that emerged, and were argued over, in this period. I am going to discuss the speculation and experiments on electricity and magnetism carried out by three people: Hans Christian Ørsted (pictured above), André-Marie Ampère, and Michael Faraday, whose work launched a second industrial revolution, based on electric motors, generators, and the use of ‘electricity’.[3]
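For reference, the four equations are usually written today in the compact vector form introduced later by Heaviside (Maxwell's own paper used a larger set of component equations):

```latex
\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad
\nabla \cdot \mathbf{B} = 0, \qquad
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}.
```

Together they unify the electric field $\mathbf{E}$ and magnetic field $\mathbf{B}$, and predict electromagnetic waves travelling at the speed of light.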

Prior to this period a change had occurred, particularly in France, Germany, and Scotland, where experimental science developed with an emphasis on quantitative and mathematical approaches. In France this became a dominant orthodoxy led by Antoine-Laurent Lavoisier and, in particular, Pierre-Simon de Laplace.[4] In response there arose ‘Romantic’ approaches to natural philosophy[5]. The Romantic Movement, particularly those influenced by the Naturphilosophie of Friedrich Schelling[6], believed (amongst a number of key concepts) that speculation, not just mathematical reasoning, was a crucial part of experimental science. This essay will explore the role of speculative theorizing in the experimental pursuits of Ørsted, Ampère, and Faraday, with the intention of illustrating how these three very different personalities arrived at their great discoveries through ‘disciplined speculation’.

Ørsted and the ‘Unity of Nature’: a discovery by chance?

There is a substantial argument in the literature as to whether Kant’s philosophy[7] or Schelling’s Naturphilosophie[8] had the greater impact on Ørsted’s scientific work. While this essay does not aim to resolve that discussion, it is relevant to understand both influences. Hans Christian Ørsted and his younger brother Anders Sandøe Ørsted had immersed themselves in Kantian philosophy as undergraduates at the University of Copenhagen.[9] After graduating in pharmacy, including practical training in his father’s shop, Hans Christian submitted a doctoral thesis critiquing Kant’s Metaphysical Foundations of Natural Science, coming out in favour of its dynamical theory – the universe as a product of polar forces in perpetual interplay – and against the atomic theory of a world constructed out of substantial entities (atoms). Ørsted’s thesis was not in total agreement with Kant; his disagreement rested on Kant’s tracing of his main concepts from empirical observation:

Kant had stopped at the outermost limit of reason: the mechanical-mathematical concept of matter was empty, but scientifically productive; the dynamical system by contrast offered concepts that made sense but were unable to be scientific.[10]

By 1799 Ørsted had already defined his life’s scientific project: going one step further than Kant and making the dynamical theory scientific.

Ørsted spent three periods during 1801-2 with the German physicist Johann Wilhelm Ritter. These nearly six weeks of discussing galvanism and conducting experiments together laid the foundation of a lifelong friendship[11]. Ritter was a self-made scientist, influenced by, but not an acolyte of, Schelling, “who can be and in fact was in his time the prototype of a Romantic physicist,”[12] and who nonetheless made significant contributions to science. He developed the accumulator; proved the existence of ultra-violet light after speculating that it must exist, given nature’s polarity and Herschel’s recent discovery of infra-red light; and demonstrated the unification of electricity and chemical change, creating the new science of electrochemistry[13]. While Ritter’s experimental work lent plausibility to the philosophical work of the Romantics, his continual imaginative, and in some cases wildly biased, speculations stimulated both excitement and caution in Ørsted.

After spending 1803 in Paris, Ørsted returned to Copenhagen in 1804 and was appointed a professor at the University in 1806. He continued to develop his experimental techniques as well as his theoretical ideas: in 1806 he published his theory of the “conflict of electricities”; in 1812 a book, Consideration of the Physical Laws of Chemistry Deduced from the New Discoveries, which by analogy linked his “conflict” theory with the polarity of magnetism and Ritter’s work on chemical affinities; and in 1816 he developed a new, high-current galvanic cell, which he subsequently used in all his experiments.

Ørsted’s discovery was made during a series of lectures he gave from November 1819 to May 1820. The audience were not casual passers-by; they were advanced amateurs with a sound foundation in natural philosophy – familiar with his thought experiment that strong electrical forces might affect a magnet. In the April lecture he took a risk and tried the experiment before the live audience. His thought experiment was vindicated when the switching on of the galvanic circuit deflected the magnetic needle. Once he had found time to confirm his results, in July 1820, he published them in a brief article, Experimenta, in the Danish journal Hesperus.[14] He then sent this to a selection of scholars across Europe to claim priority over the discovery. The primary account of these events comes from Observations on Electro-magnetism, an 1821 article that was simultaneously published in journals in London, Nuremberg, Geneva, and Paris.[15]

By the first week of September, Biot and Arago in Paris could report complete verification of Ørsted’s results. Ampère had already begun to build on Ørsted’s discovery, and on September 25 announced to the Académie his discovery of the mutual forces between two parallel electric currents. On the first Monday in December, Ampère announced his theoretical description of the effect. As a consequence of his lack of mathematical training, Ørsted neither understood nor appreciated Ampère’s contribution. With uncharacteristic sarcasm, he later wrote:

The ingenuity with which this clever French mathematician has gradually changed and developed his theory in such a way that it is consistent with a variety of contradictory facts is very remarkable.[16]

What Ørsted was reacting to was Ampère’s own speculative nature. Never a follower of the dominant Laplacian orthodoxy that “electrical and magnetic phenomena are due to two different fluids which act independently of each other,”[17] Ampère had now found a question worthy of his attention.

Ampère’s Electrodynamics

In a manner similar to Ørsted’s discovery, Ampère’s theoretical description was the result of nearly a lifetime of mental preparation. Despite Ørsted’s admonition, Ampère is generally acknowledged as the man who created the science of electrodynamics.[18] His achievements, however, are deeply rooted in his broader philosophical interests. Ampère was not representative of his era, particularly in French scientific circles. His idiosyncratic approach to his professional life meant that he had little impact on the society in which he lived, in marked contrast to both Ørsted and Faraday, whose impact went far beyond their immediate contributions to science.

Born in Lyon on January 20, 1775, Ampère had an idyllic youth, growing up between the commercial bustle of Lyon and the rural life of the small village of Poleymieux, where the family moved in 1782. His father, Jean-Jacques Ampère, was guided partially by Rousseau’s educational philosophy, and André-Marie had no formal education (except in Latin); instead he was allowed to “learn from things and to do so according to spontaneous interest.”[19]

Contrary to Rousseau’s advice, André-Marie was given early access to his father’s library. Early impressions were made by French Enlightenment masterpieces such as Georges-Louis Leclerc, comte de Buffon’s Histoire naturelle, générale et particulière and Rousseau’s popular essays on botany, while an interest in science and mathematics grew from Antoine Léonard Thomas’s Éloge de René Descartes and Denis Diderot and Jean le Rond d’Alembert’s Encyclopédie. His early facility in Latin and Italian enabled the young Ampère to master the works of Leonhard Euler and Daniel Bernoulli and Joseph Louis de Lagrange’s Méchanique analitique.[20] This intellectually invigorating childhood was brought to an end on November 23, 1792, when his father was guillotined during the Reign of Terror.

The next ten or so years were spent in provincial tutoring roles, marrying and starting a family. In 1802 Ampère was appointed professor of physics at the Bourg École Centrale. It was in these years that Ampère displayed his early gift for speculative experimentation, particularly in physics and chemistry. A lecture he gave at the December 24, 1801 meeting of the Académie de Lyon, immediately prior to his move, reveals his speculative scope, which included a “sketch of a vast system that connects all parts of physics” and an “examination of the influence of electricity on affinities and on the theory of light and colors.”[21] His ideas were highly speculative and underwent change when he moved to Paris in 1804; however, they still retained the optimistic convictions of his youth. He believed that “scientific research would eventually reveal the true causal structure of nature” and that “science could at least reach a deeper level of reality than that described by phenomenological laws.”[22] Ampère held a central philosophical and methodological attitude that a fundamental fact would emerge from a tentative “explicative hypothesis” followed by experimental confirmation. Ampère called this indirect synthesis, and demanded that this experimental confirmation should include and emphasize the prediction of new phenomena that might not have been noticed otherwise.[23]

Between 1804 and 1820 Ampère advanced to the front rank in all three fields of mathematics, chemistry and physics, despite being at methodological odds with the Laplacian mathematical and experimental program of science[24] that dominated France at the time. For example, Ampère was one of the few French scientists to take seriously Avogadro’s 1811 hypothesis that equal volumes of gas contain equal numbers of particles.[25] The doyen of French chemistry and co-chair with Laplace of the famous Société d’Arcueil, Berthollet, resisted any atomic theory or speculation, maintaining that “for progress in [physics and chemistry]….to be real, one must bring to them a great deal of precision in facts.”[26] Ampère’s interest in chemistry culminated in 1816 with the publication of his classification scheme for the elements, all 48 of them, increased from the 33 in Lavoisier’s 1787 list, with light and caloric no longer recognised as elements.[27]

Recall that Arago repeated Ørsted’s experiment on September 4, 1820 to a sceptical French audience. The observation revealed two glaring exceptions to Laplacian physics: firstly, electric and magnetic phenomena were not independent; and secondly, the perceived force acted tangentially to the current flow. In that same September and October, Ampère produced attractions and repulsions between wires conducting electric currents. In 1826 Ampère produced a polished argument describing the electrodynamic force in his most influential publication, his Théorie des phénomènes électro-dynamiques.[28]

Ampère was convinced that there existed two electric ‘fluids’, and argued that his theory was preferable to the Laplacian theory of Jean-Baptiste Biot and Siméon-Denis Poisson “because it could account for all magnetic, electromagnetic, and electrodynamic phenomena without postulating the existence of the magnetic fluids.”[29] Driven by this speculation, Ampère proceeded over the next six years in a frenzy of iterations: experiment, measure, speculate, report. In the years 1820-1822 he reported nearly fortnightly to the Académie, in a race with Biot and his protégé Félix Savart. Most notable here is that Ampère’s experimental activities were guided by predetermined goals; only with rare exceptions did Ampère experiment in pursuit of novelty for its own sake. After 1827 Ampère’s attention shifted to other topics, although he did take note of Faraday’s discovery of induction in 1831.

Speculative Theorizing at the Royal Institution: Michael Faraday, the Greatest Experimental Philosopher

Sir Humphry Davy was, in his day, a star: one of the most famous chemists of the nineteenth century and a captivating lecturer at the Royal Institution. Davy was also a Romantic scientist. He was committed to the view that “mere organization of matter could not give rise to life”[30] and his lectures can only be understood in the context of the politics of the day: revolution and conservative reaction. This influence cannot go unnoticed when considering his successor at the Royal Institution, Michael Faraday. Faraday came from a poor background but was nonetheless, like Ampère, a well- if self-educated man. By 1812, when he made Davy’s acquaintance, he had well-developed ideas on the nature of imponderable fluids and the nature of matter.[31]

Faraday was undoubtedly a brilliant and extraordinarily persistent experimentalist and, in contrast to Ampère, was extremely organized in documenting his experiments.[32] In the year following Ørsted’s discovery he repeated Ørsted’s experiments and in doing so, like Ampère, made his own discovery – that of electromagnetic rotation – leading to the invention of the electric motor in 1821.[33] The value of these experiments lies as much in the speculation that Faraday drew from them. For example, in his diary he wrote[34]:

The motion evidently belongs to the current, or what ever else it be, that is passing through the wire, and not the wire itself, except as the vehicle of the current.

From this point Faraday experimentally examined Ampère’s theory and, in addition, developed his own ideas on the nature of electricity. This led in 1831 to his discovery of the induction of electric current by magnetism. This discovery was not only the culmination of a long search, it was the starting-point for almost thirty years of brilliant researches in electricity. These include his discovery in 1845 of the ability of magnetic fields to change the polarization of light, which finally gave an experimental link to the unity of nature that had been speculated on by Kant and Schelling. There is no doubt that Faraday was driven by a search for this unity of nature, as he wrote in 1845:

I have long held an opinion, almost amounting to a conviction, in common I believe with many other lovers of natural knowledge, that the various forms under which the forces of matter are made manifest have one common origin; or in other words, are so directly related and mutually dependent that they are convertible, as it were, one into another, and possess equivalents of power in their action.[35]

As with Ørsted and Ampère, Faraday’s speculations drove his experimental directions even when they at first, or in the end, appeared unfruitful (from July 19, 1850):

Here ends my trial for the present. The results are negative. They do not shake my strong feeling of the existence of a relation between gravity and electricity, though they give no proof that such a relation exists.[36]

Disciplined Speculation

By examining the approaches of three key scientific figures, Ørsted, Ampère, and Faraday, I have attempted to illustrate the role that ‘disciplined’ speculation played in the development of electromagnetism in the first half of the nineteenth century. In particular, I have shown the influence of Kant and Schelling on all three physicists in conceptualizing their experiments, while illustrating that speculative science could manifest in many nuanced forms, as shown by the differing personalities and methods of the three examples given here.

Stauffer, Robert C., Speculation and Experiment in the Background of Ørsted’s Discovery of Electromagnetism, Isis, 48(1), (1957), pp. 33-50.

Williams, L. Pierce, Michael Faraday, Chapman and Hall, London, 1965.

This essay was first presented in November 2014, by the author, as an assessment task for HPSC10001 “From Plato to Einstein” as partial requirements for the award of a Post Graduate Diploma Arts (History and Philosophy of Science) at the University of Melbourne.

What a time to ‘have to’ go and buy milk. Mid-morning Monday, July 21 1969, and my mother sends me up the street to get some milk. No big deal, you might say. However, a few hours earlier, at 6:17 AEST that morning to be precise, a fragile craft called the Eagle had landed on the Moon – our Moon. Piloting it were two even more fragile beings, Neil Armstrong and Buzz Aldrin, and sometime that morning they would leave the Eagle and become the first people ever to walk on the Moon. The first people ever to walk on another world – stop and think about that – what a stupendous human achievement – meanwhile I was running up the street to get the milk. Isn’t it interesting what we sometimes think of as important?

On that Monday I was home, special permission from the school because we had a television and my parents would be home, like so many others, to watch this historic event. All around Australia similar events were unfolding, those who could were at their homes watching, those who couldn’t were gathered together at schools to watch the event live. I can’t remember what others thought of the event at the time. I was enthralled, as were my close friends – despite living in suburban Australia, the space race was part of our intellectual growing-up.

By the completion of the first three (unmanned) Apollo missions on April 4, 1968, I was well engaged with the race to the Moon. Interest in the American space program was a huge boost for my interest in science, despite the non-scientific nature of the Apollo program. Many scientists in the US decried Apollo as a waste of money; instead there was very vocal and influential support for unmanned, or robotic, exploration, which could deliver greater scientific returns for less cost and less risk. This debate culminated in the ‘forced’ inclusion of scientist-astronaut Harrison Schmitt on the final Moon landing of the Apollo era. Of course this was a distinction of which, as a primary school child, I was completely unaware.

As I, safely returned from my milk expedition, watched the moon-walk at 12:39 AEST along with an estimated one-fifth of the world’s population, and heard those now famous words of Neil Armstrong’s, it is safe to assume that I, amongst many others, was hooked by this spectacle. I was forever changed in a very positive way. Cynics may deride what we gained from the Moon race, or even the $25.4 billion spent by the US to put 12 men on the Moon and get them back safely. Some may even playfully question whether the ‘eternal mystique of the moon could survive the onslaught of cold hard science.’ I still think that this was the greatest technological achievement in human history – one that will take some beating. In addition, the view of the Earth from space, most famously photographed as ‘Earthrise’ by Bill Anders on board Apollo 8 on December 24, 1968, forever changed how we ‘see’ the Earth. This one image created an environmental awareness of the fragile Earth that has blossomed with time.

I will admit to feeling sorry for younger generations, living in a post-Apollo world, never having felt the awe that this event inspired.

Fifteen years on from the Apollo 11 landing I emerged from the subterranean bunker of the accelerator at Lucas Heights, home of the Australian Nuclear Science and Technology Organisation. It was dark, the stars were out, and Rob Elliman and I chatted as we clambered into a bright yellow jeep, a superannuated relic from Maralinga days, on our way to a dinner break before continuing a 48-hour weekend stint on the accelerator. “I wanted to be an astronaut,” I commented as I glanced up at the moon. “Yes, me too,” said Rob. “Irony is, probably so did most of our generation of physicists – and where did we end up?” “In a bunker pinging ions off semiconductor crystals,” I answered. “Mmm,” completed Rob, as we roared off in the jeep. Impact is such a difficult concept to tie down.

Astrophysicists Robert Nemiroff and Teresa Wilson have undertaken what they consider to be the most sensitive and comprehensive search yet for time travelers from the future. The negative results they reported indicate that time travelers from the future may not be amongst us.

Time travel has captured the public imagination for much of the past century. Modern fictional stories involving time travel to both the past and the future are not uncommon. Prominent examples are H. G. Wells’s The Time Machine (1895), the Doctor Who television series (BBC, 1963 – present), Time Enough for Love (Robert Heinlein, 1974), The Flight of the Horse (Larry Niven, 1974) and the Back to the Future film trilogy (Robert Zemeckis, 1985, 1989, 1990). These various stories present time travel as a technological problem to be solved, in the main ignoring, or skating over, the scientific and philosophical conundrums inherent in the concept of time travel.

Time travel at first seems reasonably plausible. Einstein’s theory of general relativity holds that we live in a 4-D world, with time being just another dimension like the three familiar spatial ones. Surely, then, traveling in time is just a technical matter – just like traveling in the other three dimensions?

Time travel to the future has a firm scientific footing – albeit an impractical one from the perspective of personally zipping to the future to check out how it will be. For example, Special Relativity has clear sub-luminal solutions that correspond to time travel to the future. A famous example is Paul Langevin’s 1911 twin paradox. This was exploited fictionally by Robert Heinlein in his 1956 novel Time for the Stars, involving identical twins, one of whom makes a journey into space in a high-speed rocket, aging far more slowly than the twin who remained on Earth. This twin paradox has been experimentally verified, for example using precise measurements of atomic clocks flown on aircraft and satellites.
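The size of the effect follows from the Lorentz factor of Special Relativity. As a rough illustrative sketch (not taken from any of the works mentioned above; the 0.8c cruise speed and function name are my own choices), the twin-paradox arithmetic looks like this in Python:

```python
import math

def dilation_factor(v_fraction_of_c):
    """Lorentz factor gamma = 1 / sqrt(1 - v^2/c^2), with v given as a fraction of c."""
    return 1.0 / math.sqrt(1.0 - v_fraction_of_c ** 2)

# A twin cruising at 80% of light speed while 10 years pass on Earth.
earth_years = 10.0
gamma = dilation_factor(0.8)            # = 1/sqrt(1 - 0.64) = 1/0.6 ≈ 1.667
traveller_years = earth_years / gamma   # proper time experienced by the traveller

print(f"gamma = {gamma:.3f}")                         # gamma = 1.667
print(f"traveller ages {traveller_years:.1f} years")  # traveller ages 6.0 years
```

The travelling twin returns four years younger, which is exactly the asymmetry Heinlein exploited in Time for the Stars.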

The science of time travel into the past, however, is far more controversial. The philosophy of time travel, when studied at anything greater than a superficial level, is guaranteed to do your head in. This simple idea enthralls first-year undergraduate philosophy students as they grapple with time travel, freedom and deliberation. I will leave that comment for another time and instead be challenging: read Ray Bradbury’s 1952 short story A Sound of Thunder and answer, “How did they lay the path?”; then read Robert Heinlein’s 1941 short story By His Bootstraps and answer, “How did he start this causal loop?”; and finally be enthralled by Gregory Benford’s 1980 novel Timescape, which challenges the whole idea of determinism and time travel – brilliant reading.

Little, however, has been done to actually search for time travelers or evidence of them. I for one have always wondered why contemporary commentators on iconic historical events don’t include incredulous reports of unexpectedly large numbers of spectators or even participants. The Texas School Book Depository in Dallas, Texas, on November 22, 1963 would surely have been overrun with time-traveling tourists. As for the hill of Golgotha on April 7, AD 30 – the crowds would be unimaginable.

“The Time Tunnel” James Darren and Robert Colbert

However, Michigan Technological University astrophysicist Robert Nemiroff and physics graduate student Teresa Wilson have made a “fun-but-serious effort” to find travelers from the future by searching through internet records. They didn’t search for evidence of time travelers from the past: they couldn’t think of a test that would distinguish such a person, if they existed, from someone who simply has a knowledge of the past, which is most people. The authors also said that “to the best of our knowledge, human technology to create a time machine does not exist in the past, so that time travelers from the past must originate in the future, assuming such technology is ever developed.”

A time traveler from the future might have left once-prescient content on the internet that persists today, or such information might have been placed there by a third party discussing something unusual they had heard. So the researchers picked two events of unique significance that would remain well known into the future: the discovery of Comet ISON, and the choosing of the papal name of the newly elected pope of the Catholic Church, Jorge Mario Bergoglio.

Comet ISON was discovered by the International Scientific Optical Network (ISON) on September 21, 2012, so the term came into public usage on this date. Furthermore, histories of bright comets like ISON are generally well kept by astronomical societies and journals around the world, so it would be expected to remain memorable well into the future. Any discussion, or even mention, of “Comet ISON” before September 21, 2012 could be prescient evidence of time travelers from the future.

Similarly, on March 16, 2013 the term “Pope Francis” came into public awareness when Bergoglio became the first pope to choose the name “Francis”. As papal histories are well recorded by all manner of people and organisations, for all manner of reasons, it again seems reasonable that the term “Pope Francis” would remain ‘memorable’ well into the future. Again, discussions or mentions of Pope Francis before March 16, 2013 might indicate the presence of time travelers from the future.

The researchers used Google, Google+, Facebook and Twitter to search for the terms “Comet ISON”, #cometison, “Pope Francis” and #popefrancis. The terms were only found to exist after the dates on which they entered public awareness. The researchers also used Google Trends to ascertain whether any searches had been made for these terms prior to the events; this also proved negative. This allows the conclusion that if there were time travelers from the future, they did not passively leave evidence on the internet.
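The logic of this passive search is simple enough to sketch. The snippet below is my own illustrative reconstruction, not the researchers’ actual code: the `mentions` records, the cut-off dates in `first_public_use`, and the function name `prescient_mentions` are all hypothetical stand-ins for their real search data.

```python
from datetime import date

# Hypothetical sample of (term, mention_date) records; in the real study
# these came from searches on Google, Google+, Facebook and Twitter.
mentions = [
    ("Comet ISON", date(2012, 9, 24)),
    ("Comet ISON", date(2013, 11, 28)),
    ("Pope Francis", date(2013, 3, 16)),
]

# The earliest date each term could legitimately be known to the public.
first_public_use = {
    "Comet ISON": date(2012, 9, 21),
    "Pope Francis": date(2013, 3, 16),
}

def prescient_mentions(mentions, first_public_use):
    """Return mentions dated before the term entered public awareness."""
    return [(term, d) for term, d in mentions if d < first_public_use[term]]

print(prescient_mentions(mentions, first_public_use))  # [] -> no prescient mentions
```

An empty result, as in the study, means no mention predates the event; a single dated record before the cut-off would have been the headline-grabbing positive.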

What about an active response? Here the researchers were not interested in conversing with time travelers, but rather in allowing them to indicate that time travel has become possible in the future. They did this by creating a post on a publicly available online bulletin board in September 2013, asking for one of two hashtag responses on or before August 2013. A message containing the term #ICannotChangethePast2 would indicate that time travel to the past is possible but that the time traveler believes they do not have the ability to alter their past. Conversely, a message containing the term #ICanChangethePast2 would indicate that the time traveler could change the past. Nemiroff and Wilson in their paper wisely steer away from the philosophical importance of these two stances and instead just look for the experimental results. At the time of writing, no prescient tweets or emails had been received.

The authors conclude:

Although the negative results they reported may indicate that time travelers are not amongst us and cannot communicate over the internet, they are not proof. It may be physically impossible for time travelers to leave any lasting remnants of their stay in the past, including even non-corporeal information remnants on the internet, or it may be physically impossible for us to find such information as that would violate some yet-unknown law of physics.

This could explain why there are no reports of huge crowds of time travelers at such historical events as the crucifixion of Christ or the assassination of President John F. Kennedy. Furthermore: “Time travelers may not want to be found, or may be good at covering their tracks.” Finally, “our searches were not comprehensive, so that even if time travelers left the exact event tags […] we might have missed them due to human error.”

This is certainly a sensitive and comprehensive first experiment, and I hope it encourages others to develop and extend the search for time travelers from the future.

Canadian astronaut Chris Hadfield gestures after the Russian Soyuz space capsule landed some 150 kilometers (94 miles) southeast of the town of Dzhezkazgan in central Kazakhstan, Tuesday, May 14, 2013. The Soyuz space capsule carrying a three-man crew returning from a five-month mission to the International Space Station landed safely Tuesday on the steppes of Kazakhstan. (AP Photo/ Sergei Remezov, Pool)

An enduring image of the ‘astronaut’ was created for the public by NASA, Time magazine, and Tom Wolfe’s The Right Stuff. These caricatures of the original seven American astronauts, the so-called Mercury 7, chosen to assert American supremacy over the communist threat of Sputnik, have seemingly endured way past their use-by date. A resurgence of interest in ‘astronauts’ was achieved almost single-handedly in the English-speaking world by the Canadian Colonel Chris Hadfield. His much-publicised exploits on YouTube as commander of Expedition 35 aboard the International Space Station made obvious how much had changed since May 5, 1961, when Alan Shepard rode Freedom 7 into the history books.

In his autobiographical An Astronaut’s Guide to Life on Earth, the now retired Hadfield provides one of the most readable and honest stories of his journey from glider pilot in the Royal Canadian Air Cadets in 1975 to commander of the International Space Station in 2013 – after ‘only’ 21 years of astronaut training. He candidly describes the effort and training required to become a modern astronaut – studying, practicing, learning, waiting, preparing for the worst – then being flexible enough to deal with the unexpected. What I liked is his can-do approach, as explained in his response to the 1969 Apollo 11 Moon landing and his wanting to become an astronaut:

I also knew, as did every other kid in Canada, that it was impossible. Astronauts were American. NASA only accepted applications from U.S. citizens, and Canada didn’t even have a space agency.

I was old enough to understand that getting ready wasn’t simply a matter of playing “space mission” with my brothers in our bunk beds, underneath a big National Geographic poster of the Moon. But there was no program I could enroll in, no manual I could read, no one to ask. There was only one option, I decided. I had to imagine what an astronaut might do if he were 9 years old, then do exactly the same thing.

His laconic, sometimes counter-intuitive advice is always presented with a wealth of evidence to support each lesson. His frank assessment of the impact of his dream on the rest of his family makes a good reminder for all the corporate males who neglect family events for yet another sales meeting.

Hadfield’s book is a great read and compares favorably with two of my other notable astronaut autobiographies.

At the age of five I was devastated when my mum told me that I could not become an astronaut. She dashed my probably overly enthusiastic boyish exuberance regarding space exploration, explaining that I would need to be both American and a military pilot. Despite this early reality check, and taking a different path to Hadfield, I followed the Apollo program with enthusiasm – racing home from primary school to watch the historic moon-walk of Armstrong and Aldrin.

Of those Apollo 11 voyagers only Michael Collins put pen to paper to capture his journeys as an astronaut, in the vivid and captivating Carrying the Fire. Collins displays a fine writing style and a wry sense of humor, and he wrote from an earlier time than Hadfield. Collins was part of “the Fourteen”, the third group of astronauts, after being unsuccessful in his application for the second group, the “New Nine”.

Collins adroitly describes his emergence as an astronaut, training for and flying on Gemini 10 with John Young and performing the US’s third “space walk”. Collins was originally picked as part of the Apollo 8 crew but was replaced by Jim Lovell when a bone spur was discovered on his spine, requiring surgery. He relates his feelings at losing this opportunity – Apollo 8 became the bold second manned Apollo flight, going all the way to circle the Moon – and then at gaining his place in history as the Command Module Pilot of Apollo 11.

Other books from this era that deserve a mention are Deke Slayton’s Deke and John Glenn’s A Memoir; both men were members of the Mercury 7. Glenn’s memoir is so straight that it strains the reader’s credulity. Extraordinarily enough, it is all John Glenn – astronaut, married family man, US Senator – definitely one of an uncomplicated patriotic kind. Slayton was different: grounded with a heart irregularity, he instead became the first Chief of the Astronaut Corps and selected the crews who flew the Gemini, Apollo and Skylab missions. His book, written as he was dying from cancer, covers the full space race period up to his retirement after the start of the Space Shuttle era.

My third must-read astronaut autobiography, though, is Mike Mullane’s Riding Rockets.

This in my mind is a minor classic, again so different to both Collins and Hadfield. Mullane was part of the Space Shuttle generation of astronauts, the 1978 class of TFNGs (the Thirty-Five New Guys), a group that included the first female NASA astronauts. This book has an emotional level and cadence not found in other first-hand astronaut memoirs.

Mullane, a self-confessed inhabitant of planet ‘arrested development’, shares his growing pains in recognising that women could be colleagues, and brilliant astronauts at that. He gives a brutally honest depiction of losing his friend Judy Resnik in the Challenger disaster, a loss he attributes to NASA hubris. Mullane describes in vivid detail the subsequent appalling bureaucratic treatment of the family members who were present at the disastrous launch. His own experience on STS-27, which suffered near-catastrophic heat shield damage during launch, makes this description all the more poignant.

The whole fateful uncertainty of the Space Shuttle era, the “glory and the folly” of this remarkable period in human exploration of near space, is wittily and cuttingly told. If you aren’t both amazed and angered in reading this memoir, then I suggest you go back and read it again.

Forty years ago we last set foot on the Moon. Currently, with our occupation of the low-Earth-orbit International Space Station, we are space residents. In the visionary Mission to Mars (National Geographic Society, 2013), moon-walker, space advocate, and Gemini 12 and Apollo 11 astronaut Buzz Aldrin challenges us to take a further step and colonise Mars. Aldrin advocates bypassing the Moon and instead making progressive steps to Mars via comets, asteroids and Mars’s moon Phobos. From Phobos, astronauts using remote-controlled robots would prepare the Mars landing site and habitats. Aldrin argues that regular space travel to Mars would be too expensive with Apollo-style expendable modular components, instead favoring a gravity-powered spaceship cycling permanently between the Earth and Mars. Although strongly advocating a US-led enterprise, Aldrin thankfully sees cooperation, rather than competition, with China, Europe, Russia, India and Japan as the way forward.

Currently the Dutch company Mars One is recruiting people to be part of a permanent human settlement on Mars by 2023; the US commercial firm SpaceX has its Red Dragon proposal to put a sample-return mission on Mars by 2018 (seen by NASA as a necessary precursor to human exploration); and the Chinese have a long-term plan for uncrewed flights to Mars by 2033 and a crewed phase of missions during 2040-2060. Although the funding mechanisms and motivations differ, these plans all make use of one or more ideas from the book The Case for Mars (Free Press, 1996, 2011). Written by aerospace engineer and founder of the Mars Society Robert Zubrin, it sets out a meticulous and plausible way to settle Mars. Aldrin’s book is broader, and his ideas fit well with current technologies, US aspirations for asteroid capture and exploitation, and NASA’s focus on the planet Mars.

The veteran Mars Exploration Rover Opportunity is still surprising us with its discoveries, more than nine years after the completion of its 90-day primary mission, while the sprightly youngster Curiosity is regularly rewriting and deepening our understanding of Mars – still only half-way through its three-year primary mission.

What it takes to get a scientific laboratory wheeling its way across Mars is enticingly portrayed in another new book, Red Rover (Basic Books, 2013). This first-hand account is written by Roger Wiens, lead scientist for ChemCam, the laser-zapping remote chemical analytical instrument on board the rover Curiosity. It covers his involvement in robotic space exploration from his initiation in 1990 on the NASA Genesis probe to the joyous moment when Curiosity zapped its first rock in early 2013. If this piques your curiosity, then the earlier Roving Mars: Spirit, Opportunity, and the Exploration of the Red Planet (Scribe, 2005) is well worth tracking down – a passionate insight into the 2004 twin-rover Spirit and Opportunity mission by Steve Squyres, the mission’s scientific principal investigator.

These robotic missions are prudent preparatory steps. Aldrin provides an engaging overview of the technical, economic and political reasons for humanity to journey to Mars – a self-professed vision of his ever since his return from the Moon. This book, though, represents Aldrin’s first attempt to put the whole puzzle, his Unified Space Vision, together in one place. For a more technical read on the settlement and exploration of Mars, Zubrin’s revised and updated The Case for Mars (Free Press, 2011) and Marswalk One: First Steps on a New Planet (Praxis, 2005), by astronautical historians and writers David Shayler, Andrew Salmon and Michael Shayler, are also recommended. Mission to Mars is a clarion call – essential reading for anyone interested in humanity’s next big step.

Curiosity: seen as a hazard to society in the classical world, condemned as a sin by early Christianity, and now, in the modern world, regarded as an essential part of human nature. Somewhere between the late 1500s and 1700s, attitudes in Europe changed. The change, according to Philip Ball in Curiosity, was gradual and far from obvious. In this fascinating book Ball describes how curiosity was transformed over this period from a sense of ‘wonder’ through natural philosophy to the professional curiosity of modern-day science.

The emphasis in Curiosity is not on some progress of science, gradual or otherwise, triumphing over the medieval church. Rather, Ball presents a nuanced and almost chaotic extended period of change: from the ‘scholastic’ belief that truth was a “question of authority and status: a fact was verified if it could be found in an authoritative text, but otherwise it was mere hearsay” to experimental philosophers who devised methods for turning facts into laws. These facts came either from observation of nature or via the startling new practice of ‘experiment’.

We do not arrive at the modern world by a straight line. Rather, it is reached through a plethora of men (invariably) who held what, from a modern perspective, are competing and confounding ideas at the same time. The strength of Ball’s book lies in his ability to turn his research into astute and captivating observations of these people and what they perceived as they unwrapped nature, while chronicling how the invention and use of novel instruments (the telescope, the microscope and the air-pump) rocked beliefs about the world and about the acceptable limits of curiosity.

Curiosity is a fascinating insight into what frames the questions that scientists ask. It is essential reading for anyone interested in understanding how science shapes, and is shaped by, society.

Stephen Hawking: My brief history

Stephen Hawking’s memoir is a brief amble through his life from early childhood to the present day. It cuts through some of the hype that can surround someone who is “possibly the best-known scientist in the world.” At the same time he presents many of his remembrances with a quaint, gentle sort of humor that nonetheless reminds you that this is no ordinary human. Two of my favorites are:

I was born on January 8, 1942, exactly three hundred years after the death of Galileo. I estimate, however that about two hundred thousand other babies were also born that day. I don’t know whether any of them was later interested in astronomy.

The first scientific description of time was given in 1689 by Sir Isaac Newton, who held the Lucasian chair at Cambridge that I used to occupy (though it wasn’t electrically operated in his time).

For me the memoir reflected almost two personalities. The first is an almost languid, bored life, one that becomes more focused at the age of twenty-one, when Hawking is told that he has motor neurone disease and may live only a few more years.

Even as a child Hawking does not seem to demonstrate (at least not in the memoir) the genius that we now associate with his name. As an undergraduate at Oxford he claims to have worked about a thousand hours in three years, an average of an hour a day, affecting the air of complete boredom expected of the time and subscribing to the prevailing attitude at Oxford, which was very anti-work.

You were supposed to be either brilliant without effort or accept your limitations and get a fourth-class degree.

The cloud of an early death hanging over his future lifted when he moved to graduate studies at Cambridge and there met, and became engaged to, Jane Wilde. From this point on the memoir gathers intellectual pace. Hawking takes us through being awarded a research Fellowship at Caius College, finishing his PhD, getting married, and his early work on gravity waves, the Big Bang and black holes.

The latter half of the book can seem cursory, focusing on Hawking’s work at Caltech and then back at Cambridge. During this period he had two more children, eventually split with his first wife, Jane, and married his nurse, Elaine Mason. These personal stories are adequately covered, requiring none of the histrionic embellishments you might find in a ‘celebrity’ autobiography. Hawking’s work stands as a singular intellectual triumph.

This book is also fortunately light on the obsessive detail that sometimes clouds historical biography. Instead we have an enjoyable, insightful memoir, accessible to all, of one of the most brilliant minds of modern times. I recommend it to all science buffs and to anyone interested in those who have made a difference on a cosmological scale in our time.

Just on 30 years ago I came across an intriguing book, the then relatively unknown Gaia: A new look at life on Earth (OUP, 1979) by an ‘independent scientist’, J.E. Lovelock. My earliest impression of it may surprise many people now: I was infuriated. I was infuriated not by Lovelock’s hypothesis per se, but by what I took to be the underlying purpose behind his proposing this model.

Gaia was presented by Lovelock as the product of a fifteen-year quest to substantiate the model:

“in which the Earth’s living matter, air, oceans, and land surfaces form a complex which can be seen as a single organism and which has the capacity to keep our planet a fit place for life.”

That was an intriguing hypothesis. The infuriating part was the implication that, thanks to Gaia, our fears of pollution-extermination may be unfounded. In particular I found the logic of chapter 7 (Gaia and Man: the problem of pollution) pernicious. On the untested assumption that Gaia did exist, and in the form he suggested, Lovelock proposed that “there is indeed ample evidence that pollution is as natural to Gaia as is breathing to ourselves and most other animals.” The philosopher in me took umbrage at his glib jibes at the various environmental perspectives of the day, chiding them for their naivety.

It took at least one more careful read of Gaia before I appreciated Lovelock’s perspective, one his follow-on books left the reader in no doubt about. Gaia: a new look at life on Earth was followed by The Ages of Gaia in 1988 (both books were revised for second editions in 1995); Lovelock’s autobiography Homage to Gaia: the life of an independent scientist (OUP, 2000) was followed by the more strident The Revenge of Gaia (Allen Lane, 2006) and The Vanishing Face of Gaia: a final warning (Allen Lane, 2009). Yet the question remained unanswered: does Gaia exist?

The reaction to that question is one of the great legacies of Lovelock’s book. His eloquence and the novelty of his hypothesis inspired many responses and continue to provoke fierce debate. It has taken some time for a concise, critical scientific analysis of the major assertions and arguments underpinning the Gaia hypothesis, in a form accessible to the interested educated reader, to be written. Toby Tyrrell, Professor of Earth Systems Science at the University of Southampton, has managed to deliver it.

Tyrrell gets down to business in a succinct manner. He distills the Gaia hypothesis to three main facts, or classes of facts, that Lovelock has advanced in its support. They are:

The environment is very well suited to the organisms that inhabit it

The Earth’s atmosphere is a biological construct whose composition is far from expectations of (abiotic) chemical equilibrium, and

The Earth has been a stable environment over time, despite external forcings.

Tyrrell reminds us that the Gaia hypothesis is not the only one that looks at the relationship between life and environment on Earth. So in his book he takes these three facts and examines the evidence for them in the light of two competing hypotheses: the Geological and the Coevolutionary.

The Geological hypothesis was the dominant paradigm among geologists and other scientists at the time Gaia was written. According to this way of thinking life has been a passenger on Earth, helplessly buffeted around by a mixture of geological forces and astronomical processes. Life adapts to this environment but does not itself affect it.

The Coevolutionary hypothesis assumes that there is two-way traffic: not only does the nature of the environment shape the nature of life, but life also acts as a force that shapes the planetary environment. There is one obvious, although too easily missed, difference between the coevolution of life and climate and the coevolution of two life forms, such as the interactions between predators and prey or between hosts and parasites. There is no equivalent cumulative evolutionary process that builds better-adapted oceans or atmospheres over time. This means the hypothesis makes no claims about the wider outcomes of the interaction. The Gaia hypothesis suggests that the outcome of the interaction has stabilized the planet and kept it favorable for life; Coevolution is neutral on such claims.

Having carefully framed the questions, and presented some viable alternatives, Tyrrell then very eloquently and elegantly (in the scientific sense) looks at the evidence for which hypothesis fits the facts best.

In doing this he takes us on quite an adventure. We look at extremophiles and at life over the glacial and interglacial periods, because, as Lovelock states, “the most important property of Gaia is the tendency to optimize conditions for all terrestrial life”. In other chapters, by examining deep-sea plankton nitrogen-to-phosphorus ratios and atmospheric oxygen and methane levels over time, Tyrrell convincingly demonstrates that the Earth’s atmosphere is a biological construct. Having established that life has the power to shape the Earth, he then examines what environmental alterations are produced.

By carefully examining the two evolutionary innovations that have most obviously shaken the world, (i) the evolution of oxygen-yielding photosynthesis and (ii) the colonization of land by the first forests, we find that life has always changed to exploit and closely fit its environment, as evolution dictates it must. Finally, by examining the rocks, glaciation levels, seawater chemistry and the ups and downs of greenhouse gases over the past 500 million years, Tyrrell concludes that Gaia has not helped to keep the Earth’s environment stable, because the research shows that the environment has not been stable.

The final chapter is a masterful example of clear thinking. Tyrrell revisits the road travelled, weaves the strands together and draws his conclusion: Gaia is a fascinating but flawed hypothesis. He does not stop there; he proposes new “intriguing research topics” that have arisen as a consequence of evaluating the Gaia hypothesis. He also reminds us why this evaluation matters: planetary management requires solid understanding, Gaia imbues undue optimism, and we need an unbiased worldview.

On Gaia: a critical investigation of the relationship between life and Earth (Princeton University Press, 2013) is a great contribution to an important scientific, and human, debate. Toby Tyrrell demonstrates a fine grasp of both science and science communication, and an intelligent reader will find the book rewarding on both counts.

On Gaia: A Critical Investigation of the Relationship between Life and Earth, Toby Tyrrell

The connectome module as a 3D graph. Cell types with stronger connections are positioned closer to each other, using an algorithm. Three spatially segregated groups are observed that closely match the pathways identified through clustering (colouring of spheres). The dominant direction of signal flow is oriented into the page.

The human brain has 100 billion neurons, connected to each other in networks that allow us to interpret the world around us, plan for the future, and control our actions and movements. Mapping those networks, creating a wiring diagram of the brain, could help scientists learn how we each become our unique selves. The study of the brain and all its connections is connectomics, a word soon to become as familiar as ‘genetics’.

In three papers appearing in Nature, scientists report their first steps toward this goal. Using a combination of human and artificial intelligence, one group has mapped all the wiring among 950 neurons within a tiny patch of the mouse retina, while a second group looks at a classic problem of neural computation, the detection of visual motion, in the eye of a fruitfly.

The eye of the mouse

The retina is technically part of the brain, as it is composed of neurons that process visual information. Neurons come in many types, and the retina is estimated to contain 50 to 100 types, but they’ve never been exhaustively characterised. Their connections are even less well known. Neurons in the retina are classified into five classes: photoreceptors, horizontal cells, bipolar cells, amacrine cells and ganglion cells. Within each class are many types, classified by shape and by the connections they make with other neurons.

In this study, the research team focused on a section of the retina known as the inner plexiform layer, which is one of several layers sandwiched between the photoreceptors, which receive visual input, and the ganglion cells, which relay visual information to the brain via the optic nerve. The neurons of the inner plexiform layer help to process visual information as it passes from the surface of the eye to the optic nerve.

By mapping all of the neurons in a 117-micrometre-by-80-micrometre patch of tissue, researchers were able to classify most of the neurons they found based on their patterns of wiring. They also identified a type of retinal cell that had not been seen before. To map all of the connections in this small patch of retina, the researchers first took electron micrographs of the targeted section, generating high-resolution three-dimensional images of the tissue.

Developing a wiring diagram from these images required both human and artificial intelligence. First, the researchers hired about 225 German undergraduates to trace the “skeleton” of each neuron, which took more than 20,000 hours of work (a little more than two years of continuous effort).

To flesh out the bodies of the neurons, the researchers fed these traced skeletons into a computer algorithm, which expands the skeletons into full neuron shapes. The researchers used machine learning to train the algorithm, known as a convolutional network, to detect the boundaries between neurons. Using those as reference points, the algorithm can fill in the entire body of each neuron.
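The boundary-detection step can be pictured with a minimal convolution sketch. The kernel below is hand-crafted for illustration; the researchers’ convolutional network instead learns many such kernels from labelled training data, and all names and values here are invented:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution: the basic operation a convolutional
    network stacks (with learned kernels) to find boundaries."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A toy "electron micrograph": two cell interiors (1s) separated by a membrane (0s).
img = np.array([
    [1, 1, 0, 1, 1],
    [1, 1, 0, 1, 1],
    [1, 1, 0, 1, 1],
], dtype=float)

# Hand-crafted horizontal-gradient kernel; a trained network would learn this.
edge_kernel = np.array([[-1.0, 0.0, 1.0]])

response = conv2d(img, edge_kernel)
# Non-zero responses flag the membrane (boundary) between the two cells.
print(response)
```

Once boundaries are detected, filling in each neuron’s body reduces to flooding outward from the traced skeleton until a boundary response is hit.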

The only previous complete wiring diagram, which mapped all of the connections between the 302 neurons found in the worm Caenorhabditis elegans, was reported in 1986 and required more than a dozen years of tedious labor.

Classifying neurons

Wiring diagrams allow scientists to see where neurons connect with each other to form synapses – the junctions that allow neurons to relay messages. By analyzing how neurons are connected to each other, researchers can classify different types of neurons.

The researchers were able to identify most of the 950 neurons included in the new retinal-wiring diagram based on their connections with other neurons, as well as the shape of the neuron. A handful of neurons could not be classified because there was only one of their type, or because only a fragment of the neuron was included in the imaged sample. In this study, the researchers identified a new class of bipolar cells, which relay information from photoreceptors to ganglion cells. However, further study is needed to determine this cell type’s exact function.
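The idea of typing neurons by their connections can be sketched crudely: treat each neuron’s row of a binary contact matrix as its connectivity “fingerprint” and give identical fingerprints the same putative type. This is a toy stand-in, not the researchers’ actual classification procedure, and the matrix below is invented:

```python
import numpy as np

# Tiny hand-made contact matrix standing in for the 950-neuron dataset.
# contacts[i][j] = 1 if neuron i touches neuron j.
contacts = np.array([
    [0, 1, 1, 0],   # neuron A: contacts neurons 1 and 2
    [0, 1, 1, 0],   # neuron B: same partners, so same putative type as A
    [1, 0, 0, 1],   # neuron C: different partners, so a different type
])

def same_type(i, j, m):
    """Crude rule: identical contact rows imply one cell type."""
    return bool(np.array_equal(m[i], m[j]))

print(same_type(0, 1, contacts))  # A and B match
print(same_type(0, 2, contacts))  # A and C do not
```

In practice the real analysis also weighs neuron shape and tolerates noisy, partial fingerprints, which is why a handful of cells in the study could not be classified.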

As the researchers themselves note, this analysis provides contact information but not synaptic strength. The absence of a contact always indicates the lack of a synaptic connection, but a contact does not guarantee a synapse. If our goal is to ‘learn how we each become our unique selves’ then synaptic strength is an important measure: an indication of cognitive abilities related to memory and learning.

The project of classifying types is not complete, but this work shows that it should be possible, in principle, if scaled up to a larger piece of tissue. To analyse an entire mouse brain in this way would require several billion hours of human attention (225 students working for 200,000 years), suggesting that such a task is incredibly ambitious. The researchers are convinced, as are many other neurobiologists, that mapping and decoding the connectome will revolutionize brain research.
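The scale of the problem follows from simple arithmetic. A back-of-envelope sketch, taking the mouse brain’s neuron count (roughly 70 million, an assumed round figure) and extrapolating from the retina patch:

```python
# Back-of-envelope scaling of the manual tracing effort.
# The mouse-brain neuron count is an assumption for illustration only.
patch_neurons = 950
patch_hours = 20_000                             # human tracing time for the patch
hours_per_neuron = patch_hours / patch_neurons   # roughly 21 hours per neuron

mouse_brain_neurons = 70_000_000
total_hours = mouse_brain_neurons * hours_per_neuron

print(f"{total_hours:.2e} hours")  # on the order of billions of hours
```

The exercise makes plain why improving the artificial-intelligence half of the pipeline, rather than hiring more tracers, is the only plausible route to a whole-brain connectome.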

Ever tried to swat a fruitfly?

An example of the experimental set-up to measure the neuron activity and visual field of the fruitfly. Image is courtesy of Matthew S. Maisak, Juergen Haag et al, Max Planck Institute of Neurobiology.

A fruitfly is effective at dodging predators and rapidly navigating during flight, so it is a perfect insect model for visual motion detection. It is easy to make simple models of motion detection: photoreceptor cells cannot detect direction but downstream (towards the brain) neurons, called tangential cells, do respond to the direction of movement. Somewhere in between lies the neural mechanism that creates this discrimination.

The authors here developed a semi-automatic method to construct a connectome of 379 neurons and 8,637 chemical synaptic contacts, then matched these reconstructed neurons to cell types using light microscopy. Using this model, together with supporting evidence from some very elegant optical microscopy that forms the third paper in Nature, the researchers identified the cells that constitute a motion-detection circuit.

What these studies demonstrate is the power of connectomes to uncover key insights into the detail of the brain’s processing circuitry. At present the scales seem ‘mind-boggling’, and any serious work would appear to be the province of well-heeled laboratories. However, as a Nature News & Views article points out, “the researchers have stressed that the connectomic reconstructions will be public resources”, making them invaluable resources for neuroscience.

Concrete and cement. Words synonymous with solidity and, well, staidness. Cement is probably the most ubiquitous building material of the late 20th century, yet it may provide some very 21st-century surprises. Researchers at the University of Alicante have developed a cement material incorporating carbon nanofibres in its composition, turning cement into an excellent conductor of electricity. Meanwhile, scientists from the USA, Japan, Finland and Germany have unraveled the formula for transforming liquid cement into liquid metal, opening up its use in the profitable consumer-electronics marketplace for thin films, protective coatings and computer chips.

The warmer side of concrete

Concrete is a composite material composed of coarse granular material (the aggregate or filler) embedded in a hard matrix of material (the cement or binder) that sets and hardens independently, filling the space among the aggregate particles and gluing them together. Concrete made from such mixtures was first used in Mesopotamia in the third millennium B.C. and later in Egypt. It was further improved by the Ancient Macedonians and three centuries later on a large scale by Roman engineers. They used both natural pozzolans (such as pumice) and artificial pozzolans (ground brick or pottery) in these concretes. Many excellent examples of structures made from these concretes are still standing, notably the huge dome of the Pantheon in Rome and the massive Baths of Caracalla. The vast system of Roman aqueducts also made extensive use of hydraulic cement.

Conventional concrete is a poor conductor of electricity. To obtain a cement-like compound that is effective as a heating element, the material must instead have a low resistivity. This has been achieved by the addition of conductive materials such as carbon fibres. The new technology, developed and patented by the Research Group in Multifunctional Concrete Conductors at the University of Alicante’s Civil Engineering Department, allows, among other functions, the material to heat up as current passes through it.

The technology allows buildings to be heated, and prevents the formation of ice on infrastructure such as highways, railways, roads and airstrips. The new conductive compound is particularly interesting because it keeps the structural properties of concrete and does not compromise the durability of the structures themselves. It is also highly versatile: any existing structure or surface can be coated with it, and thermal control maintained by applying a continuous electric current. So far the research group has run trials of the technology in plasters containing carbonaceous materials. These tests have given very satisfactory results, heating the material optimally with minimum energy consumption.
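The heating principle is ordinary Joule heating: resistance R = ρL/A and dissipated power P = V²/R. A sketch with invented round numbers, not figures from the Alicante group:

```python
# Illustrative Joule-heating estimate for a conductive cement element.
# Every number here is an assumed round value for illustration.
rho = 1.0            # resistivity of the carbon-fibre cement, ohm-metres (assumed)
length = 2.0         # length of the current path, metres
area = 0.5 * 0.05    # cross-section: 0.5 m wide by 5 cm thick, square metres

resistance = rho * length / area   # R = rho * L / A
voltage = 48.0                     # applied DC voltage, volts (assumed)
power = voltage**2 / resistance    # Joule heating, watts

print(f"R = {resistance:.1f} ohm, P = {power:.1f} W")
```

Lowering the resistivity (by adding more conductive fibre) or raising the voltage increases the heating power, which is why tuning the carbon content is central to the design.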

21st century metallic-glass cement

A team of scientists from the USA, Japan, Finland and Germany have made a metallic-glass cement. The new material has many applications, including as the thin-film resistors used in liquid-crystal displays: basically the flat-panel monitor you are probably reading this on at the moment. The team have demonstrated how to make, and how to understand, the cement-to-metal transformation. The result has attractive attributes, including better resistance to corrosion than traditional metal, less brittleness than traditional glass, conductivity, low energy loss in magnetic fields, and fluidity for ease of processing and molding. Previously, only metals had been able to transition to a metallic-glass form. Cement does it by a process called electron trapping, a phenomenon previously seen only in ammonia solutions. Understanding how cement joined this exclusive club opens the possibility of turning other solid, normally insulating, materials into room-temperature semiconductors.

This phenomenon of trapping electrons and turning liquid cement into liquid metal was observed recently, but not explained in detail until now. Now that the conditions needed to create trapped electrons in materials are known, other materials can be developed and tested to see whether they can be made to conduct electricity in this way. The results were reported in the journal Proceedings of the National Academy of Sciences in the article “Network topology for the formation of solvated electrons in binary CaO-Al2O3 composition glasses.”

Close-up visualizations (A) and (B) of single-particle electron states in the 64CaO glass. (C) Simulation box and the electron spin-density of the 64CaO glass with one oxygen subtracted at h2, that is, with two additional electrons. The two electrons have the same spin and occupy separate cavities, h1 (boundary, also shown in B) and h2 (center, location of removed oxygen), which are separated by 12 Å. (D) Cage structure around the spin-density of one electron corresponding to the h2 cavity (close-up from C). Al, gray; Ca, green; O, red. Image credit: Argonne National Laboratory.

The team of scientists studied mayenite, a component of alumina cement made of calcium and aluminum oxides. They melted it at temperatures of 2,000 degrees Celsius using an aerodynamic levitator with carbon-dioxide laser-beam heating, processing the material in different atmospheres to control the way oxygen bonds in the resulting glass. The levitator keeps the hot liquid from touching any container surfaces and forming crystals, letting the liquid cool into a glassy state that can trap electrons in the way needed for electronic conduction.

The scientists discovered that the conductivity arises when free electrons are “trapped” in the cage-like structures that form in the glass. The trapping of electrons provides a mechanism for conductivity similar to the one that operates in metals. To uncover the details of this process, the scientists combined several experimental techniques and analyzed the results using a supercomputer.

These developments are sure to provide an impetus for a new look at old building material.

Since Erwin Schrödinger’s famous 1935 cat thought experiment, physicists around the globe have tried to create large-scale systems to test how the rules of quantum mechanics apply to everyday objects. So far, scientists have managed to recreate quantum effects only on much smaller scales, leaving a nagging possibility that quantum mechanics, by itself, is not sufficient to describe reality.

Researchers Alex Lvovsky and Christoph Simon from the University of Calgary recently made a significant step forward in this direction by creating a large system that displays quantum behaviour, publishing their results in Nature Physics.

Understanding Schrödinger’s cat

Quantum mechanics is without doubt one of the most successful physics theories to date. Without it the world we live in would be remarkably different: it drives and shapes our modern world, making possible everything from computers, mobile phones and nuclear weapons to solar cells and everyday appliances. At the same time it presents us with conundrums at the far end of reason, challenging even the greatest minds to comprehend.

In contrast to our everyday experience, quantum physics allows for particles to be in two states at the same time — so-called quantum superpositions. A radioactive nucleus, for example, can simultaneously be in a decayed and non-decayed state.

Schrödinger’s Cat, many-worlds interpretation, with universe branching: visualization of the separation of the universe due to two superposed and entangled quantum mechanical states. (Image credit: Christian Schirm)

Applying these quantum rules to large objects leads to paradoxical and even bizarre consequences. To emphasize this, Erwin Schrödinger, one of the founding fathers of quantum physics, proposed in 1935 a thought experiment involving a cat that could be killed by a mechanism triggered by the decay of a single atomic nucleus. If the nucleus is in a superposition of decayed and non-decayed states, and if quantum physics applies to large objects, then the cat should be simultaneously dead and alive.

Schrödinger’s thought experiment involves a (macroscopic) cat whose quantum state becomes entangled with that of a (microscopic) decaying nucleus. While quantum systems with properties akin to ‘Schrödinger’s cat’ have been achieved at a micro level, the application of this principle to everyday macro objects has proved to be difficult to demonstrate. The experimental creation of such micro-macro entanglement is what these authors successfully achieved.

Photons help to illuminate the paradox

The breakthrough achieved by Calgary quantum physicists is that they were able to contrive a quantum state of light that consists of a hundred million photons and can even be seen by the naked eye. In their state, the “dead” and “alive” components of the “cat” correspond to quantum states that differ by tens of thousands of photons.

While the findings are promising, study co-author Simon admits that many questions remain unanswered.

“We are still very far from being able to do this with a real cat,” he says. “But this result suggests there is ample opportunity for progress in that direction.”

Seeing quantum effects requires extremely precise measurements. In order to see the quantum nature of this state, one has to be able to count the number of photons in it perfectly. This becomes more and more difficult as the total number of photons is increased. Distinguishing one photon from two photons is within reach of current technology, but distinguishing a million photons from a million plus one is not.
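One way to see the difficulty: the intrinsic (shot-noise) spread of a light field with a mean of N photons grows as the square root of N, while the resolution needed to tell N from N + 1 stays fixed at one photon. A quick illustrative calculation:

```python
import math

# Shot-noise spread (sqrt of the mean photon number, the Poisson standard
# deviation) versus the fixed one-photon resolution a perfect count demands.
for n in (1, 100, 1_000_000):
    shot_noise = math.sqrt(n)
    print(f"N = {n:>9}: spread ~ {shot_noise:,.0f} photons, "
          f"needed resolution: 1 photon")
```

At a million photons the natural spread is around a thousand photons, so resolving a single-photon difference demands precision a thousand times finer than the field’s own fluctuations, which is well beyond current detectors.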

Decoherence: the emergence of the classical world from the quantum

Why don’t we see quantum effects in everyday life? The current explanation has to do with decoherence.

Physicists see quantum systems as fragile. When a photon interacts with its environment, even just a tiny bit, the superposition is destroyed. This interaction could be the result of a measurement or an observation, or just a random encounter. Superposition is the fundamental principle of quantum physics that says systems can exist in all their possible states simultaneously; when measured, however, only one of those states is found.

This effect is known as decoherence, and it has been studied intensively over the last few decades. The puzzle it addresses is the one Schrödinger dramatized in his famous cat paradox. Unfortunately for non-physicists, decoherence only explains the apparent collapse of the wave function, as the quantum nature of the system “leaks” into the environment. It does not tell us where the line between the quantum and the everyday lies, if such a line exists at all.

Although Schrödinger’s thought experiment was originally intended to convey the absurdity of applying quantum mechanics to macroscopic objects, this experiment and related ones suggest that it may apply on all scales.

If you are interested in the history and foundations of quantum mechanics then I highly recommend Quantum: Einstein, Bohr and the great debate about the nature of reality, by Manjit Kumar (2009), and The Age of Entanglement: when quantum physics was reborn, by Louisa Gilder (2008). Both are well-researched, captivating accounts of science and scientists.

Imagine: Lake Vostok is covered by more than 3,700 metres of Antarctic ice. Devoid of sunlight, it lies far below sea level in a depression that formed 60 million years ago, when the continental plates shifted and cracked. Few nutrients are available. Yet scientists, led by Scott Rogers, a Bowling Green State University professor of biological sciences, have found a surprising variety of life forms living and reproducing in this extreme environment. A paper published June 26 in PLOS ONE details the thousands of species they identified through DNA and RNA sequencing.

What lies sealed beneath the glacial ice?

Antarctica, 35 million years ago, had a temperate climate and was inhabited by a diverse range of plants and animals. About 34 million years ago a huge drop in temperature occurred and ice covered the lake, which was probably still connected to the Southern Ocean at the time. This lowered the sea level by about 100 metres, which could have cut off Lake Vostok from the ocean. The ice cover was intermittent until a second big plunge in temperature took place 14 million years ago, and sea level dropped even farther.

An artist’s representation of the aquatic system scientists believe is buried beneath the Antarctic ice sheet. (Credit: Zina Deretsky, NSF)

As the ice crept across the lake, it plunged the lake into total darkness, isolated it from the atmosphere, and steadily increased the pressure in the lake from the weight of the glacier. While many species probably disappeared from the lake, as indicated by Rogers’ results, some seem to have survived.

Rogers and his colleagues examined core sections from the ice above Lake Vostok that were extracted in 1998. At the time, no one had reached the actual lake, a feat that was achieved only last year. But the drilling had gone deep enough to reach a layer of ice at the bottom of the sheet that formed as lake water froze onto the bottom of the glacier where it meets the lake. The team sampled cores from two areas of the lake, the southern main basin and near an embayment on the southwestern end of the lake. The embayment appears to contain much of the biological activity in the lake.

By sequencing the DNA and RNA from the ice samples, the team identified thousands of bacteria, including some that are commonly found in the digestive systems of fish, crustaceans and annelid worms, in addition to fungi and two species of archaea, or single-celled organisms that tend to live in extreme environments. Other species they identified are associated with habitats of lake or ocean sediments. Psychrophiles, or organisms that live in extreme cold, were found, along with heat-loving thermophiles, which suggests the presence of hydrothermal vents deep in the lake. Rogers said the presence of marine and freshwater species supports the hypothesis that the lake once was connected to the ocean, and that the freshwater was deposited in the lake by the overriding glacier.

These results, however, are not without controversy.

Other claims and other lakes

Long before he began using these techniques to study the ice, Rogers and his team had developed a method to ensure purity. Sections of core ice were immersed in a sodium hypochlorite (bleach) solution, then rinsed three times with sterile water, removing an outer layer. Under strict sterile conditions, the remaining core ice was then melted, filtered and refrozen.

Sergey Bulat has doubts about the results, despite the careful sample preparation. Bulat, a Lake Vostok expert at the Petersburg Nuclear Physics Institute in Gatchina, Russia, is quoted as saying that “it is very probable that the samples are heavily contaminated with tissue and microbes from the outside world.”

Bulat and Rogers have both studied Vostok ice samples taken in the 1990s by a consortium of Russian, French and US Antarctic researchers. In the past, the pair pondered a close collaboration, but their scientific relationship broke down over an enduring disagreement about the level of contamination of the samples.

In March, Bulat himself faced criticism over an unknown species of bacterium his team had discovered in a Lake Vostok ice core drilled last year. Sceptics said that this finding was due to contamination from drilling fluid.

Eric Cravens, assistant curator at the National Ice Core Laboratory in Littleton, Colo., holds up a piece of ice taken from above Lake Vostok, a remote region of Antarctica. The ice offers a glance at hundreds of thousands of years of geologic history. (Photo credit: Melanie Conner/National Science Foundation).

The two researchers’ claims are probably the first in what will no doubt be an interesting period of discovery in Lake Vostok and other Antarctic lakes. The first samples of water from Lake Vostok itself, collected in early 2013, are currently being analysed. The Russian team has said that it hopes to have results within the next year. Bacteria of known species have been recovered from the smaller Antarctic lakes, Whillans and Vida. Lake Vida has been sealed off for around 2,800 years. Ice cores drilled there in 2005 and 2010 have recently revealed life, but at about one-tenth of the abundance usually found in freshwater lakes in temperate climate zones. Similarly, in Lake Whillans the bacteria levels were roughly one-tenth the abundance of microbes in the oceans.

These results are glimpses into the sub-glacial world of Antarctica – glimpses that may change not only how we view this continent but also provide clues to how extraterrestrial life might exist on icy moons such as Jupiter’s Europa and Saturn’s Enceladus.

This image, taken by the NASA/ESA Hubble Space Telescope, shows five moons orbiting Pluto, the distant, icy dwarf planet (ESA/Hubble/AFP/Showalter)

The dwarf planet Pluto can still generate public interest – if the naming of its two recently discovered moons is anything to go by. After their discovery, the leader of the research team, Mark Showalter, called for a public vote to suggest names for the two objects. The contest, aptly named ‘Pluto Rocks!‘, concluded with Vulcan as the outright favourite, after a William Shatner-led push by Star Trek fans, with the proposed names Cerberus and Styx ranking second and third respectively. The International Astronomical Union (IAU) has announced that the names Kerberos and Styx have officially been recognised for these fourth and fifth moons of Pluto – a decision that is probably correct, even if it proves not to be the most popular.

The moons of Pluto

The new moons were discovered in 2011 and 2012, during observations of the Pluto system made with the NASA/ESA Hubble Space Telescope, increasing the number of known Pluto moons to five. Kerberos lies between the orbits of Nix and Hydra, two bigger moons discovered by Hubble in 2005, and Styx lies between Charon, the innermost and biggest moon, and Nix. Both have circular orbits assumed to be in the plane of the other satellites in the system. Kerberos has an estimated diameter of 13 to 34 kilometres, while Styx is thought to be irregular in shape and 10 to 25 kilometres across.

Artist illustration of Pluto (centre) from one of its small moons. The largest moon Charon is on the right. Credit: NASA, ESA and G. Bacon (STScI)

The recent discoveries of the two small moons orbiting Pluto raise interesting new questions about how the dwarf planet formed. We now know that a total of four outer moons circle around a central “double-planet” comprising Pluto and its large, nearby moon Charon.

No home for Spock

The International Astronomical Union (IAU) is the arbiter of the naming process of celestial bodies, and is advised and supported by astronomers active in different fields. On discovery, astronomical objects receive unambiguous and official catalogue designations. When common names are assigned, the IAU rules ensure that the names work across different languages and cultures in order to support collaborative worldwide research and avoid confusion.

To be consistent with the names of the other Pluto satellites, the names had to be picked from classical European mythology, in particular with reference to the underworld — the realm where the souls of the deceased go in the afterlife. Showalter submitted Vulcan and Cerberus to the IAU where the Working Group for Planetary System Nomenclature (WGPSN) and the Committee on Small Body Nomenclature (WGSBN) discussed the names for approval.

After a final deliberation, the IAU Working Group and Committee agreed to change Cerberus to Kerberos — the Greek spelling of the word, to avoid confusion with an asteroid called 1865 Cerberus. According to mythology, Cerberus was a many-headed dog that guarded the entrance to the underworld. In keeping with the underworld theme the third most popular name was chosen — Styx, the name of the goddess who ruled over the underworld river, also called the Styx.

The IAU decided against the name Vulcan for a number of reasons: Vulcan had already been used for a hypothetical planet between Mercury and the Sun (although this planet was found not to exist), the term “vulcanoid” remains attached to any asteroid existing inside the orbit of Mercury, and finally Vulcan does not fit into the underworld mythological scheme. The Romans identified Vulcan with the Greek smith-god Hephaestus, and he became associated like his Greek counterpart with the constructive use of fire in metalworking.

In a press release the IAU has stated that it:

wholeheartedly welcomes the public’s interest in recent discoveries, and continues to stress the importance of having a unified naming procedure following certain rules, such as involving the IAU as early as possible, and making the process open and free to all. Read more about the naming of astronomical objects here. The process of possibly giving public names to exoplanets (see iau1301), and more generally to yet-to-be discovered Solar System planets and to planetary satellites, is currently under review by the new IAU Executive Committee Task Group Public Naming of Planets and Planetary Satellites.

It all began with Galileo

Naming moons is not a new controversy. No sooner had telescopes been developed – improving on viewing the heavens by eye – than naming rights were contested. Galileo Galilei discovered the four largest moons of Jupiter sometime between 1609 and 1610. He initially named his discovery the Cosmica Sidera (“Cosimo’s stars”) in the hope of gaining patronage from a former student of his, the Grand Duke Cosimo II of Tuscany. He changed this to the “Medician Stars”, honouring all four brothers in the Medici clan, in his 1610 book Sidereus Nuncius (The Starry Messenger). Their present names – those of lovers of the god Zeus, the Greek equivalent of Jupiter – were suggested by Johannes Kepler to Simon Marius, who had discovered the moons independently around the same time as Galileo. Marius published the names Io, Europa, Ganymede and Callisto in his Mundus Jovialis, in 1614.

Charon’s discovery as a time-varying bulge on the image of Pluto (seen near the top at left, but absent on the right). Photo credit NASA.

Despite the firm hand of the IAU, naming even in the present day is still an art as much as a science. Charon was discovered in 1978 when astronomer James Christy noticed images of Pluto were strangely elongated. The direction of elongation cycled back and forth over 6.39 days – Pluto’s rotation period. Searching through their archives of Pluto images taken years before, Christy found more cases where Pluto appeared elongated. Additional images confirmed he had discovered the first known moon of Pluto. Christy proposed the name Charon after the mythological ferryman who carried souls across the river Acheron, one of the five mythical rivers that surrounded Pluto’s underworld. Apart from the mythological connection for this name, Christy chose it because the first four letters also matched the name of his wife, Charlene.

New Horizons, Pluto and the Kuiper Belt

Pluto’s origin and identity have long puzzled astronomers. Orbiting at a distance of between 4.4 and 7.4 billion kilometres from the Sun, it has proved difficult to investigate. Pluto’s true place in the Solar System began to reveal itself only in 1992, when astronomers began to find small icy objects beyond Neptune that were similar to Pluto not only in orbit but also in size and composition. Astronomers now believe Pluto to be the largest member of the Kuiper belt, a somewhat stable ring of objects located between 30 and 50 AU from the Sun. Though Pluto is the largest of the Kuiper belt objects discovered so far, Neptune’s moon Triton, which is slightly larger than Pluto, is similar to it both geologically and atmospherically, and is believed to be a captured Kuiper belt object.
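A quick back-of-the-envelope check (sketched here in Python; the 4.4–7.4 billion km figures come from the paragraph above, and the kilometres-per-AU value is the standard conversion) shows why that orbit places Pluto squarely inside the 30–50 AU Kuiper belt:

```python
# Convert Pluto's orbital distances from kilometres to astronomical units (AU)
AU_KM = 1.496e8  # one astronomical unit, in kilometres

perihelion_km = 4.4e9  # Pluto's closest approach to the Sun
aphelion_km = 7.4e9    # Pluto's farthest distance from the Sun

perihelion_au = perihelion_km / AU_KM
aphelion_au = aphelion_km / AU_KM

# Pluto ranges from roughly 29 to 50 AU -- within the 30-50 AU Kuiper belt
print(f"Pluto's orbit: {perihelion_au:.1f} to {aphelion_au:.1f} AU")
```

The output, roughly 29.4 to 49.5 AU, matches the Kuiper belt’s span almost exactly.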

The excitement over Pluto and its moons will be heightened in 2015 when NASA’s New Horizons spacecraft makes a flyby and then continues on to study Kuiper belt objects in 2016-2020. Showalter has already suggested that some of the names from the Pluto Rocks list, as well as some from Star Trek, may well become the names of craters and mountains revealed on Pluto by New Horizons – this story is far from over.

I do not think I would be alone in fearing ‘losing my mind’. Even the common expression “are you out of your mind?” gives solid form to what may seem a merely philosophical train of thought. At any given time most people will declare confidently that “I am in my right mind” and point to themselves as that ‘I’. The quandary is that the ‘I’ of age eight is different to the ‘I’ of forty-eight, despite the continuity of ‘I’ joining the two. Our mind, then, is one of those puzzling concepts at once both familiar and ephemeral. To lose one’s mind, though, even partially, through trauma, disease or disorder, is, we would all agree, to lose some quintessential part of us. Trouble In Mind is a collection of real stories about people who have suffered just that – losing part of their minds.

The stories are from patients that the author, neuropsychologist Jenni Ogden, has worked with over her career in New Zealand, the USA and Australia. Ten of the 15 patients portrayed in this book featured in Ogden’s 2005 textbook Fractured Minds. Trouble in Mind is neither a text, an assessment, nor a treatment book. There are other books on the market that describe patients with a variety of neurological conditions, many written by clinicians such as Ogden. Most, I find, fall short because the clinician writer is excited by the condition and fails to connect the human to that condition. In other examples, non-clinician writers often focus on complete cures, without any reference to the many who underwent similar treatments without success.

Ogden’s stories succinctly and clearly explain the medical conditions and engagingly present the human side of each in an empathetic and nuanced style. Whether talking about patients with car-crash brain trauma, rugby-induced concussion or Parkinson’s disease, Ogden covers the personal, social and family elements with a clarity that is often missing in clinically based non-fiction written by clinicians. In this respect Ogden writes with a feeling like that of neurologist Oliver Sacks at his best.

These are stories that will resonate with most in our society. I will mention three in particular by way of illustrating the breadth covered. Michael was a 24-year-old motorcycle maniac. After a horrific accident, he left the critical care unit with a virtually ignored head injury; the surgeons had grappled with keeping him alive, along with the extensive orthopedic surgeries and specialist care. Neither he nor his doctors realised that he was cortically blind. This resolved itself after two years, leaving him with object agnosia – the inability to recognise what he was seeing. Ogden then describes her many years of work with Michael – the trials, tribulations and treatments that led him, 24 years later, to living a life with a most interesting disability. Amongst this we also get Ogden’s motivation – her clinician’s ‘delight’ in being asked to work with such an unusual case. Yes, her delight, her excitement; those real human emotions not hidden behind neutral, banal psychology-speak.

In another chapter Ogden looks at the bizarre neuropsychological disorder of hemineglect – ignoring visual stimuli on the side of space opposite to the damaged side of the brain. In this case the patient is a chirpy 50-year-old woman, Janet. The chapter is fascinating, and the descriptions of Janet’s sessions with Ogden are sometimes, well, hilarious. But this is real life, not Hollywood. Janet’s hemineglect is caused by a brain tumor, and she dies four long and difficult years after her diagnosis. Ogden doesn’t just end the chapter there: she humanely discusses the impact of Janet’s treatment and death on her husband and close family and friends. She also assesses the effectiveness of the treatments, looking at other cases from her own and others’ casebooks.

The final chapter is aptly called “The Long Goodbye: coming to terms with Alzheimer’s disease”. It follows Sophie’s diagnosis and cognitive decline from Alzheimer’s disease. I learnt a lot about the disease from reading this chapter. I equally learnt what it would be like to watch a person who “was once active, independent, intelligent, humorous and loving gradually lose her mind”.

This collection of stories is eminently readable. I recommend it both to readers with a specific, perhaps personal, topic of interest and to those who are more generally curious about how our minds work, particularly when they go awry due to damage to that squishy grey organ inside our skulls.

The colorful and polished launch of Shenzhou 10 confirms that China has come of age as a spacefaring nation. At 19:40 AEST on Tuesday June 11 (17:40 local time) three ‘yuhangyuan’, Chinese astronauts, embarked on China’s sixth crewed space mission. This second mission to Tiangong 1, the Chinese space station, is a credible step in mastering the art and engineering of space exploration. It was also a public relations success.

Shenzhou 10 crew

After announcing in early April that Wang Yaping, a 35-year-old Major in the PLA Air Force, was one of the three-person Shenzhou 10 crew, silence descended on the identity of the other crew members. Wang was named as the in-flight instructor. She becomes China’s second female astronaut and the ninth yuhangyuan to have flown.

Building the suspense, the Chinese finally announced the names of the other two crew members yesterday. Along with Wang, the Shenzhou 10 crew are: Nie Haisheng (48), Commander of Shenzhou 10, a veteran of Shenzhou 6 in 2005 and a Major General in the PLA Air Force; and Zhang Xiaoguang (47), Assistant Pilot of Shenzhou 10, backup crew of Shenzhou 9 (along with Wang) and a Senior Colonel in the PLA Air Force.

This shows the Chinese astronaut corps to be a modern, relatively gender-balanced operation. Zhang and Nie both hail from the 1996 second astronaut selection, as do all male yuhangyuan flown to date, including Yang Liwei, China’s first astronaut. The first group of astronauts was selected in 1971, in a hopelessly ambitious and quickly abandoned attempt to put astronauts into space in the 1970s. Wang, along with Liu Yang, China’s first female yuhangyuan, comes from China’s third group, selected in 2010. The Chinese, at least to the outside world, have not followed the more memorable and colourful NASA lead of allowing astronaut groups to pick their own nicknames.

The heavenly palace

With the launch a success, Nie and his crew will now chase, rendezvous and dock with an orbital laboratory, Tiangong (a Mandarin word meaning “heavenly palace”), which was launched nearly two years ago, on September 29, 2011. On November 2, 2011, China successfully docked the unmanned Shenzhou 8 with Tiangong. It remained docked for 14 days, then undocked and repeated the docking maneuver – proof that the first was not a fluke. It then undocked for good, leaving Tiangong to its solitary orbit 370km above the Earth’s surface.

Shenzhou 10 successfully on its way.

On June 18, 2012 a second craft, the crewed Shenzhou 9, docked with Tiangong. The space station was then declared operational: China had joined Russia and the USA in having the capability to become space residents. The three-person crew on Tiangong conducted experiments and acclimatised to prolonged weightlessness during their 10-day mission.

The normal pattern was for two to sleep in Tiangong and one to sleep in Shenzhou. At only 10.4m in length, Tiangong is smaller than the 1971 Russian Salyut (13.1m) and the 1973 US Skylab (36.1m) space laboratories. Like these other first space laboratories, Tiangong is designed with a limited lifespan. The current mission, Shenzhou 10, will be the last to Tiangong 1.

Shenzhou 10

As is now the norm, the Shenzhou launch was covered live by the Chinese media, providing pictures, expert commentary and graphics depicting what was going on at the various stages of the launch. Shenzhou 10 is now safely in orbit and will spend the next few days approaching a suitable orbit for docking. It will dock with the orbiting lab module Tiangong 1 several times.

The view inside the cabin during launch and the view of the Earth from the outside camera. The Chinese provided high quality visuals and coverage of this launch.

“The three astronauts will stay in orbit for 15 days, including 12 days when they will work inside the coupled complex of the Shenzhou 10 and Tiangong 1,” said Zhou Jianping, head designer of China’s manned space program. It is expected that they will set a Chinese record for time in orbit.

The interesting point is that the mission profile for Shenzhou 10 is opaque. Although it is expected that the craft will be put through its docking paces – not something to be dismissed lightly – the scientific and engineering goals of this mission are less obvious than those of recent Shenzhou missions.

As the in-flight instructor, Wang will give lectures to middle and elementary school students from orbit. This could be seen as mimicry of recent astronauts on the International Space Station, such as the everyday science experiments of Don Pettit and, most recently, Chris Hadfield – experiments designed by US high school students and carried out in space by the astronauts. What Wang will demonstrate, and whether it will be released to the West, remain at present questions without answers.

This will be the last Chinese human space mission for quite some time. The next Shenzhou missions are expected to fly to the Tiangong 2 laboratory. This will be an expanded version of Tiangong 1, similar in design to the Russian 1986 Mir space station. It is expected to be able to sustain 20-day visits. It will probably not be launched until around 2015 or possibly later. The gap between the flight of Shenzhou 10 and Shenzhou 11 could ultimately prove to be the longest hiatus in Chinese human spaceflight to date.

Regional implications

How will the Chinese promote this current mission once it is completed? Its success, or otherwise, will not necessarily aid any military space activities, nor directly any Chinese commercial space activities. It does, I suggest, provide a compelling message to China’s regional competitors. Human exploration is possibly the most expensive and prestigious space activity, and I think we will find China promoting this expedition to its fullest. It has large domestic appeal: pride in national achievement, as well as motivating and driving high-technology industry development. From the perspective of an “Asian space race”, China has an audacious program of robotic missions to the Moon over the next few years, and fully intends to continue its long march to put humans on the Moon and Mars in the next few decades.

Science and the space station

Understanding the response of the human body to low and microgravity is critical for space exploration. Astronauts undergoing long periods of weightlessness – such as in flights to and from Mars – will need to understand the impact of this on their ability to carry out tasks, both routine and emergency.

The space station provides an ideal environment to study many aspects of humans in space, including: balance, digestion, muscle and bone retention and heart behaviour.

It also provides a unique window on the earth and sun – one in which scientists can use their understanding to respond to opportunities as they arise as well as conduct scheduled experiments and observations.

As a solar observatory, the space station sits clear of Earth’s atmosphere; it also offers a unique perspective on terrestrial weather and atmospheric science.

What happens if you wring out a wet cloth in microgravity?

The four laboratory sections house experiments selected on their scientific merit or educational and industrial interest. These include understanding how microgravity affects animal and plant growth, and understanding and developing novel industrial processes.

The Alpha Magnetic Spectrometer (AMS) has been in operation and collecting data since June 27, 2011, and has an expected operational lifetime in excess of ten years.

The International Space Station with the Endeavour Space Shuttle docked, along with descriptions of each section of the station. NASA

A tour of the International Space Station

Approaching the Rassvet dock in your Soyuz spacecraft, you realise just how big and frail the space station is.

Its most visible features are the eight solar arrays. They generate 84 kilowatts of power and have a wingspan of 73 metres, wider than a Boeing 777. They, along with the array of habitable modules, are supported by a central truss.

An hour-long tour of the ISS.

As you dock you can see the Russian crew’s Soyuz craft docked nearby at Poisk. At the other end, on the Harmony node, is a newly captured SpaceX Dragon supply ship.

Through the cramped airlock you enter the Zarya module. After taking a moment to orient yourself in this weightless environment, you proceed down a circular tunnel to the Zvezda service module.

This is the space station control and services centre, containing the Russian guidance and navigation computers.

It’s also the sleeping and hygiene quarters for two of the cosmonauts. In an emergency it can support all six of the crew.

Back through Zarya you’ll come to the US-built Unity node, a galley where it is possible for all the crew to gather and eat together.

Mealtime in the Unity node. NASA

Just off from this is the Italian-built Tranquility node, which is multi-purpose – with storage, berthing and habitation facilities. It houses the ESA-built observatory, the Cupola.

With its six side windows and a top window, the Cupola gives observers a Millennium Falcon-type view of Earth below.

You wind your way back to Unity then through the truss structure (that supports the solar arrays and the Canadarm2) to the Destiny Laboratory – the primary US research facility. Continuing on from here, you reach Harmony.

You note that Destiny and Harmony are square in cross-section, rather than round like the older modules. This gives four usable working “walls” – there is no up or down, so no floor or ceiling.

Beyond the objects velcroed to every available “wall” space, the next noticeable thing is the total absence of chairs.

Harmony is home to four crew. The sleeping berths radiate into each “wall”. Each is about the size of a phone booth and has a sleeping bag-type arrangement as well as a computer and space for personal effects.

Sleeping on the ISS is a novel experience. The station orbits Earth every 90 minutes, which means there is a sunrise and sunset every hour and a half.
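The arithmetic behind that disorienting light cycle is simple. A sketch in Python, using only the 90-minute orbital period quoted above:

```python
# How many sunrises does an ISS crew member see in one Earth day?
orbit_minutes = 90        # one full orbit of Earth
minutes_per_day = 24 * 60

sunrises_per_day = minutes_per_day / orbit_minutes
print(f"{sunrises_per_day:.0f} sunrises (and 16 sunsets) every 24 hours")
```

Sixteen sunrises a day is why crew sleeping berths are windowless and schedules are kept on Coordinated Universal Time rather than by the Sun.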

The Harmony node also houses sanitary (yes, that is your toothbrush and toothpaste velcroed to the wall) and exercise facilities. A treadmill, gym and seatless exercise bike are part of the necessary exercise regime to ensure muscle does not waste away in the microgravity environment of the space station.

And … that’s it! This is your world for the next six months, all 388 cubic metres of it – about half the interior space of a Jumbo jet.

The international space residents

The first expedition of William Shepherd (US), Yuri Gidzenko (Russia) and Sergei Krikalev (Russia) was launched on a Russian Soyuz on October 31, 2000 and returned on the space shuttle Discovery on March 21, 2001.

Expedition #35 crew. NASA.

At the moment, the ISS is hosting a six-person expedition, #35. Current commander Chris Hadfield (Canada) and flight engineers Tom Marshburn (US) and Roman Romanenko (Russia) docked on December 21, 2012.

Robonaut 2, or R2, the first humanoid robot to travel to space and the first US-built robot to visit the space station, performs a few finger motion and sensor checkouts aboard the ISS. NASA

Expedition #35 is an all-male crew, but 31 women have flown to the space station – including Expedition 16 commander Peggy Whitson, the station’s first female commander. In all, there have been nationals from 15 countries, including seven tourists.

International co-operation

a laboratory in space, for the conduct of science and applications and the development of new technologies

a permanent observatory in high-inclination orbit, from which to observe Earth, the Solar System and the rest of the Universe

a transportation node where payloads and vehicles are stationed, assembled, processed and deployed to their destination

a servicing capability from which payloads and vehicles are maintained, repaired, replenished and refurbished

an assembly capability from which large space structures and systems are assembled and verified

a research and technology capability in space, where the unique space environment enhances commercial opportunities and encourages commercial investment in space

a storage depot for consumables, payloads and spares

a staging base for possible future missions, such as a permanent lunar base, a human mission to Mars, robotic planetary probes, a human mission to survey the asteroids, and a scientific and communications facility in geosynchronous orbit

In the 2010 US National Space Policy, the ISS was given additional roles of serving commercial, diplomatic and educational purposes.

The ISS has acted as an example and vehicle for international co-operation, but the US has vetoed China’s participation. China, as a result, is now pursuing its own space laboratory program.

The first of these, Tiangong-1, is in orbit and has docked with Shenzhou-9, but is still to be inhabited.

Where to?

The US Administration will fund the ISS until 2020. With continued interest from the international community, the space station should continue as a vehicle for fruitful science and demonstration of international co-operation for at least this decade.

The development of commercial spacecraft also provides a second string to the station’s future. SpaceX has demonstrated its capability to deliver cargo and possibly crew to supplement the ageing Russian Soyuz capability.

Only time will tell whether the US allows the addition of China and India, Asia’s space-capable nations, to the ISS fraternity.

Kevin Orrman-Rossiter does not work for, consult to, own shares in or receive funding from any company or organisation that would benefit from this article, and has no relevant affiliations.

Laughing rats, name-calling wild parrots, archer-fish with a sense of humour, and educated ants; the naturalist Charles Darwin would have loved this book. The philosopher Rene Descartes would equally have found it deeply troubling. Both with good reason.

In Descartes’ dualist philosophy the mind and body are two separate entities: the material body and the immaterial mind or soul. The latter links humans to the mind of God, making us, in his philosophy, different to animals. Descartes famously reasoned that animals are composed only of material substances and therefore have no capacity to reason. More importantly for how we see animals, Descartes wrote that a human person, such as you or I, is something distinct from that person’s body. An animal, being material only, could in this way of thinking never have a mind – never have a concept of “I”.

This stance was extended by the behaviorist paradigms of the mid 20th century associated with the psychologist B F Skinner.

Darwin on the other hand thought differently. He was a natural philosopher who got up out of his armchair and voyaged the world, most notably aboard the Beagle. Darwin attributed emotions to many animals and even argued that earthworms are cognitive beings. In his classic The Descent of Man he argued, most persuasively, that we and the other animals differ in our mental powers by degree, not in kind.

Today the discussion is no different: researchers still debate not only advanced claims of intelligence in animals but also how to test whether their abilities reflect human-like cognition.

This brings me to what I liked so much about this book.

An archer-fish demonstrating its uncanny aim. Photo credit BBC.

Each chapter focuses on an animal in a particular observational or experimental setting. Virginia Morell introduces us to the scientists and the animals, explaining the studies, the results and some of the trials and triumphs along the way to building an understanding of what the scientists find. The animals and settings we may already have a prejudice about – captive dolphins, elephant memories, chimpanzees and language, dogs and humans – are very carefully presented to ensure the most compelling results come through. The more novel animals, ants and fish for example, benefit from their novelty, which makes for an easier presentation. I had no preconceived ideas regarding the ability of ants to teach, so with no mental hurdle of my own to overcome, that chapter was very illuminating. The examples and researchers chosen for these chapters succinctly illustrate what we have learnt about the emotions and intelligence of these animals.

Yes, I did say chosen. The book does not pretend, nor claim, to be an encyclopaedic, academic or ‘balanced’ presentation of the entire field. This is a lively, non-fiction tour of the cutting edge of animal cognitive science. Virginia Morell translates the scientific jargon of the field into words that all can engage with.

Each chapter is a separate story, reflecting that some of the chapters were adapted from articles previously published between 2008 and 2012. These are neatly book-ended with chapters that frame them quite succinctly. This, I think, is a strength of the book. Each chapter, each story, is so self-contained that you can read it, look at the references and ponder what the researchers and Virginia are conveying to you. Not only do you get an appreciation of the scientific significance of the various studies – you get that rare glimpse into the scientific process and personality that is often missed in science communication writing.

For example, consider the archer-fish and neuroscientist Stefan Schuster. I learnt that Stefan has spent more than forty years investigating how fish think and make decisions, and that the idea of seeing life from the mind of a fish was something that grabbed him as a child. Stefan’s story is more than just his careful experimentation on fish behaviour. Along the way he has made key discoveries about the sophisticated mental abilities of the archer-fish, which is well-named, for it is the sharpshooter of the piscine world.

In the chapter discussing his work I learnt that Schuster owes his success to curiosity, fun and serendipity – as well as careful experimentation. Schuster and his students had discovered that archer-fish learn how to shoot at difficult and novel targets by watching another skilled fish perform the task. That means they had taken the viewpoint of the other fish. Did they copy or imitate? Let the philosophers debate the definitions. What the archer-fish do involves cognition. Although we don’t understand the relationship between cognition and sentience, scientists know that one informs the other.

Each chapter is replete with great stories, good science and probing philosophy. Morell displays her ability to write engagingly for a general audience, while presenting the science at a suitably intriguing level. If you view animals the same way after reading this book, give it a second read – it will be worth it.

Man had always assumed that he was more intelligent than dolphins because he had achieved so much – the wheel, New York, wars and so on – while all the dolphins had ever done was muck about in the water having a good time. But conversely, the dolphins had always believed that they were far more intelligent than man – for precisely the same reasons.
– Douglas Adams, The Hitchhiker’s Guide to the Galaxy

An artist’s concept depicts a dense, dead star called a white dwarf crossing in front of a small, red star. The white dwarf’s gravity is so great that it bends and magnifies light from the red star, causing it to appear bigger than it really is. (Credit: NASA/JPL-Caltech)

NASA’s Kepler space telescope has witnessed the effects of a dead star bending the light of its companion star. The findings are among the first detections of this phenomenon — a prediction of Einstein’s general theory of relativity — in binary, or double, star systems.

The dead star, called a white dwarf, is the burnt-out core of what used to be a star like our sun. It is locked in an orbiting dance with its partner, a small “red dwarf” star. While the tiny white dwarf is physically smaller than the red dwarf, it is more massive.

This white dwarf is about the size of Earth but has half the mass of the sun. It’s so hefty that the red dwarf, of roughly the same mass though 50 times larger in diameter (about half our sun’s diameter), is circling around the white dwarf. These findings are to be published on April 20 in the Astrophysical Journal.

Kepler’s role

The Kepler space telescope’s primary job is to scan stars in search of orbiting planets. As the planets pass by, they block the starlight by minuscule amounts, which Kepler’s sensitive detectors can see.

“The technique is equivalent to spotting a flea on a light bulb 3,000 miles away, roughly the distance from Los Angeles to New York City,” said Avi Shporer, co-author of the study.
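To get a feel for just how small those dips are, here is a back-of-the-envelope sketch. The radii below are standard reference values, not figures from the study; the fractional dip in starlight is simply the ratio of the planet’s disc area to the star’s.

```python
# Back-of-the-envelope transit depths (standard reference radii).
R_SUN = 696_000.0      # km, solar radius
R_JUPITER = 71_492.0   # km, Jupiter's equatorial radius
R_EARTH = 6_371.0      # km, Earth's mean radius

def transit_depth(r_planet_km, r_star_km):
    """Fractional dip in starlight when the planet crosses the star's disc."""
    return (r_planet_km / r_star_km) ** 2

# A Jupiter-sized planet blocks about 1% of a sun-like star's light,
# while an Earth-sized body blocks less than 0.01%.
jupiter_dip = transit_depth(R_JUPITER, R_SUN)
earth_dip = transit_depth(R_EARTH, R_SUN)
```

That one-percent-versus-hundredth-of-a-percent gap is why mistaking a small eclipsing star for a giant planet is an easy trap to fall into at first glance.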

Muirhead and his colleagues regularly use public Kepler data to search for and confirm planets around smaller stars, the red dwarfs, also known as M dwarfs. These stars are cooler and redder than our yellow sun. When the team first looked at the Kepler data for a target called KOI-256 (Kepler Object of Interest), they thought they were looking at a huge gas giant planet eclipsing the red dwarf.

“We saw what appeared to be huge dips in the light from the star, and suspected it was from a giant planet, roughly the size of Jupiter, passing in front,” said Muirhead.

This chart shows data from NASA’s Kepler space telescope. The plot on the left shows data collected by Kepler for a star called KOI-256, which is a small red dwarf. At first, astronomers thought the dip in starlight was due to a large planet passing in front of the star. But certain clues, such as the sharpness of the dip, indicated it was actually a white dwarf. In fact, in the data shown at left, the white dwarf is passing behind the red dwarf, an event referred to as a secondary eclipse. The change in brightness is a result of the total light of the system dropping. The plot on the right shows what happens when the white dwarf passes in front of, or transits, the star. The dip in brightness is incredibly subtle because the white dwarf, while just over half as massive as our sun, is only the size of Earth, much smaller than the red dwarf star. The blue line shows what would be expected given the size of the white dwarf. The red line reveals what was actually observed: the mass of the white dwarf is so great that its gravity bent and magnified the light of the red star. Because the star’s light was magnified, the transiting white dwarf blocked an even smaller fraction of the total starlight than it would have without the distortion. This effect, called gravitational lensing, allowed the researchers to precisely measure the mass of the white dwarf. (Image credit: NASA/Ames/JPL-Caltech)

To learn more about the star system, Muirhead and his colleagues turned to the Hale Telescope at Palomar Observatory near San Diego. Using a technique called radial velocity, they discovered that the red dwarf was wobbling around like a spinning top. The wobble was far too big to be caused by the tug of a planet. That is when they knew they were looking at a massive white dwarf passing behind the red dwarf, rather than a gas giant passing in front.

The team also incorporated ultraviolet measurements of KOI-256 taken by the Galaxy Evolution Explorer (GALEX), a NASA space telescope now operated by the California Institute of Technology in Pasadena. The GALEX observations, led by Cornell University, Ithaca, N.Y., are part of an ongoing program to measure ultraviolet activity in all the stars in Kepler’s field of view, an indicator of potential habitability for planets in the systems. These data revealed the red dwarf is very active, consistent with being “spun-up” by the orbit of the more massive white dwarf.

The astronomers then went back to the Kepler data and were surprised by what they saw. When the white dwarf passed in front of its star, its gravity caused the starlight to bend and brighten by a measurable amount.

“Only Kepler could detect this tiny, tiny effect,” said Doug Hudgins, the Kepler program scientist at NASA Headquarters, Washington. “But with this detection, we are witnessing Einstein’s general theory of relativity at play in a far-flung star system.”

One of the predictions of Einstein’s general theory of relativity is that gravity bends light. Astronomers regularly observe this phenomenon, often called gravitational lensing, in our galaxy and beyond. For example, the light from a distant galaxy can be bent and magnified by matter in front of it. This reveals new information about dark matter and dark energy, two mysterious ingredients in our universe.

Gravitational lensing has also been used to discover new planets and hunt for free-floating planets.

In the new Kepler study, scientists used gravitational lensing to determine the mass of the white dwarf. By combining this information with all the data they had acquired, the scientists were also able to accurately measure the mass of the red dwarf and the physical sizes of both stars. Kepler’s data and Einstein’s theory of relativity have together led to a better understanding of how binary stars evolve.
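As a rough illustration of why the lensing effect is so subtle, the sketch below estimates the Einstein radius of a white dwarf lensing its close companion. The masses and the orbital period are assumed round numbers for a tight white dwarf/red dwarf binary, not the published KOI-256 parameters; the point is only that the Einstein radius comes out at a few thousand kilometres, comparable to the Earth-sized white dwarf itself, so the magnification is tiny.

```python
import math

# Physical constants (SI units)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

# Assumed round numbers for a compact white dwarf / red dwarf binary
# (illustrative only, not the published KOI-256 parameters):
m_wd = 0.6 * M_SUN       # white dwarf mass
m_rd = 0.5 * M_SUN       # red dwarf mass
period = 1.4 * 86400.0   # orbital period of ~1.4 days, in seconds

# Kepler's third law gives the orbital separation a:
#   a^3 = G (m1 + m2) P^2 / (4 pi^2)
a = (G * (m_wd + m_rd) * period**2 / (4 * math.pi**2)) ** (1 / 3)

# Einstein radius for a lens of mass m at distance a from the source:
#   R_E = sqrt(4 G m a / c^2)
r_einstein = math.sqrt(4 * G * m_wd * a / C**2)

# a comes out around 0.025 AU, and r_einstein around a few thousand km,
# smaller than the Earth-sized white dwarf itself -- hence a subtle effect.
```

Because the Einstein radius is smaller than the white dwarf’s own disc, the lensing only partially offsets the transit dip rather than producing a dramatic brightening, which is exactly the delicate signature Kepler had to tease out.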

In short, the AMS results have shown an excess of antimatter particles within a certain energy range. The measurements represent 18 months of data from the US$1.5 billion instrument.

The AMS experiment is a collaboration of 56 institutions, across 16 countries, run by the European Organisation for Nuclear Research (CERN). The AMS is a giant magnet and cosmic-ray detector complex fixed to the outside of the International Space Station (ISS).

Dark matter matters

The visible matter in the universe, such as you, me, the stars and planets, adds up to less than 5% of the universe. The other 95% is dark, either dark matter or dark energy. Dark matter can be observed indirectly through its interaction with visible matter but has yet to be directly detected.

Cosmic rays are charged high-energy particles that permeate space. The AMS is designed to study them before they have a chance to interact with Earth’s atmosphere.

The AMS explained.

An excess of antimatter within the cosmic rays has been observed in two recent experiments – and these were labelled as “tantalising hints” of dark-matter decay.

One possibility for the excess antimatter, predicted by a theory known as supersymmetry, is that positrons (antimatter electrons) could be produced when two particles of dark matter collide and annihilate.

But the AMS measurement cannot yet entirely rule out the alternative explanation that the positrons originate from pulsars (rotating neutron stars) distributed around the galactic plane.

Supersymmetry theories also predict a cut-off at higher energies above the mass range of dark matter particles, and this has not yet been observed. Over the coming years, the AMS will further refine the measurement’s precision, and clarify the behaviour of the positron fraction at higher energies.

Ting’s thing

The AMS idea dates back to 1994. At that time NASA was desperate to develop a “sexy” science project that would endear the US scientific community to the ISS.

The AMS hitches a ride to the ISS in 2011. NASA

Enter Ting and his idea of a space-borne magnet that would sift matter and antimatter.

By 1995 Ting had NASA’s agreement. The US Department of Energy would fund the detector, and NASA would provide space on the ISS and a shuttle to fly it there. Ting would obtain the foreign involvement needed to build the instrument.

By 2008 the detector was complete but the shuttle flight schedule was a shambles. With delays after the 2003 Columbia disaster and the probable 2010 retirement of the shuttles, no flights were available to deliver the device to the space station.

Ting persisted and, through lobbying, got Congress to authorise one more shuttle flight, STS-134 – the second last ever.

On May 16, 2011, the spectrometer was launched on the final flight of NASA’s youngest shuttle, Endeavour. It has been in operation and collecting data since June 27, 2011, and has an expected operational lifetime in excess of 10 years.

Antimatter matters

There is a second scientific aim of the AMS. Experimental evidence indicates that our galaxy is made of matter. But the Big Bang theory assumes equal amounts of matter and antimatter were present at the origin of the universe.

So what happened to all the antimatter? The observation of just one antihelium nucleus would provide evidence for the existence of a large amount of antimatter somewhere in the universe.

With a sensitivity three orders of magnitude better than previous experiments, the Alpha Magnetic Spectrometer will be searching for the existence of this primordial antimatter.

Amazingly we have come to recognise that we know little about what makes up 95% of our universe.

Today’s AMS results mark a precision start to an audacious experiment to redress that ignorance.

Kevin Orrman-Rossiter does not work for, consult to, own shares in or receive funding from any company or organisation that would benefit from this article, and has no relevant affiliations.

This artist’s conception illustrates the brown dwarf named 2MASSJ22282889-431026. NASA’s Hubble and Spitzer space telescopes observed the object to learn more about its turbulent atmosphere. Image credit: NASA/JPL-Caltech

Long-distance weather reports are now commonplace. The report for 2MASSJ22282889-431026, though, is somewhat unusual. It forecasts wind-driven, planet-sized clouds, with the light varying in time, brightening and dimming about every 90 minutes. The clouds on 2MASSJ22282889-431026 are composed of hot grains of sand, liquid drops of iron, and other exotic compounds. Definitely not the first place to spend a summer holiday.

Not that 2MASSJ22282889-431026 (or 2M2228 as it is known in The Astrophysical Journal Letters) will appear on a travel itinerary anytime soon. For 2M2228 is a brown dwarf, 39.1 light years from earth. Brown dwarves form out of condensing gas, as stars do, but lack the mass to fuse hydrogen atoms and produce energy. Instead, these objects, which some call failed stars, are more similar to gas planets, such as Jupiter and Saturn, with their complex, varied atmospheres. Although brown dwarves are cool relative to other stars, they are actually hot by earthly standards. This particular object is about 600 to 700 degrees Celsius.

The atmosphere of 2M2228

Astronomers using NASA’s Spitzer and Hubble space telescopes have probed the stormy atmosphere of this brown dwarf, creating the most detailed “weather map” yet for this class of cool, star-like orbs. “With Hubble and Spitzer, we were able to look at different atmospheric layers of a brown dwarf, similar to the way doctors use medical imaging techniques to study the different tissues in your body,” said Daniel Apai, the principal investigator of the research at the University of Arizona in Tucson.

But more surprising, the team also found that the timing of this change in brightness depended on the wavelength of infrared light they used to look.

This artist’s illustration shows the atmosphere of the brown dwarf 2M2228. The results were unexpected, revealing offset layers of material as indicated in the diagram. For example, the large, bright patch in the outer layer has shifted to the right in the inner layer. The observations were made using different wavelengths of light: Hubble sees infrared light from deeper in the object, while Spitzer sees longer-wavelength infrared light from the outermost surface. Both telescopes watched the brown dwarf as it rotated every 1.4 hours, changing in brightness as brighter or darker patches rotated into the visible hemisphere. At each observed wavelength, the timing of the changes in brightness was offset, or out of phase, indicating the shifting layers of material. Image credit: NASA/JPL-Caltech

These variations are the result of different layers or patches of material swirling around the brown dwarf in windy storms as large as Earth itself. Spitzer and Hubble see different atmospheric layers because certain infrared wavelengths are blocked by vapors of water and methane high up, while other infrared wavelengths emerge from much deeper layers.

The new research is a stepping-stone toward a better understanding not only of brown dwarves, but also of the atmospheres of planets beyond our solar system.

The Spitzer Space Telescope consists of a 0.85-metre diameter telescope and three cryogenically cooled science instruments which perform imaging and spectroscopy in the 3–180 micron wavelength range. Since infrared is primarily heat radiation, detectors are most sensitive to infrared light when they are kept extremely cold. Using the latest in large-format detector arrays, Spitzer is able to make observations that are more sensitive than any previous mission. Spitzer’s mission lifetime requirement was 2.5 years; this was later extended to five years.

Launched on August 25, 2003, Spitzer is now more than 9 years into its mission, and orbits around the sun more than 100 million kilometres behind Earth. It has heated up just a bit – its instruments have warmed from -271 Celsius to -242 Celsius. This is still way colder than a chunk of ice at 0 Celsius. More importantly, it is still cold enough for some of Spitzer’s infrared detectors to keep on probing the cosmos for at least two more years; the project funding has been extended to 2016.
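The same figures in kelvin make the margin clearer (a trivial conversion, but worth spelling out):

```python
def celsius_to_kelvin(c):
    """Convert a temperature from degrees Celsius to kelvin."""
    return c + 273.15

# Spitzer's instruments have warmed from roughly 2 K to roughly 31 K,
# which is still well over two hundred kelvin below melting ice.
launch_temp = celsius_to_kelvin(-271)   # ~2 K
current_temp = celsius_to_kelvin(-242)  # ~31 K
ice_temp = celsius_to_kelvin(0)         # 273.15 K
```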

Spitzer seen against the infrared sky. The band of light is the glowing dust emission from the Milky Way galaxy seen at 100 microns (as seen by the IRAS/COBE missions). Image credit NASA/JPL

Spitzer is the largest infrared telescope ever launched into space. Its highly sensitive instruments allow scientists to peer into cosmic regions that are hidden from optical telescopes, including dusty stellar nurseries, the centres of galaxies, and newly forming planetary systems. Spitzer’s infrared eyes also allow astronomers to see cooler objects in space, like brown dwarves, extrasolar planets, giant molecular clouds, and organic molecules that may hold the secret to life on other planets.

Instead of orbiting Earth itself, the observatory trails behind Earth as it orbits the Sun and drifts away from us at about 1/10th of one astronomical unit per year.

This innovative orbit lets nature cool the telescope, allowing the observatory to operate for around 5.5 years using 360 litres of liquid helium coolant. In comparison, Spitzer’s predecessor, the Infrared Astronomical Satellite, used 520 litres of cryogen in only 10 months.

This unique orbital trajectory also keeps the observatory away from much of Earth’s heat, which can reach 250 Kelvin (-23 Celsius) for satellites and spacecraft in more conventional near-Earth orbits.

More scientific duets: the asteroid belt of Vega

Like a gracefully ageing rock star, Spitzer is revelling in duets. It has also teamed up with the European Space Agency‘s Herschel Space Observatory. Using data from both, astronomers have discovered what appears to be a large asteroid belt around the star Vega, the second-brightest star in northern night skies.

The data are consistent with the star having an inner, warm belt and outer, cool belt separated by a gap. The discovery of this asteroid belt-like band of debris around Vega makes the star similar to another observed star called Fomalhaut. Again this formation is similar to the asteroid and Kuiper belts in our own solar system.

Astronomers have discovered what appears to be a large asteroid belt around the bright star Vega, as illustrated here at left in brown. The ring of warm, rocky debris was detected using NASA’s Spitzer Space Telescope, and the European Space Agency’s Herschel Space Observatory. In this diagram, the Vega system, which was already known to have a cooler outer belt of comets (orange), is compared to our solar system with its asteroid and Kuiper belts. The relative size of our solar system compared to Vega is illustrated by the small drawing in the middle. On the right, our solar system is scaled up four times. The comparison illustrates that both systems have inner and outer belts with similar proportions. The gap between the inner and outer debris belts in both systems works out to a ratio of about 1-to-10, with the outer belt 10 times farther away from its host star than the inner belt. Astronomers think that the gap in the Vega system may be filled with planets, as is the case in our solar system. Image credit: NASA/JPL-Caltech

What is maintaining the gap between the warm and cool belts around Vega and Fomalhaut? The results strongly suggest the answer is multiple planets. Our solar system’s asteroid belt, which lies between Mars and Jupiter, is maintained by the gravity of the terrestrial planets and the giant planets, and the outer Kuiper belt is sculpted by the giant planets.

“Our findings (accepted for publication in the Astrophysical Journal) echo recent results showing multiple-planet systems are common beyond our sun,” said Kate Su, an astronomer at the Steward Observatory at the University of Arizona, Tucson.

Vega and Fomalhaut are similar in other ways. Both are about twice the mass of our sun and burn hotter and bluer in visible light. Both stars are relatively nearby, at about 25 light-years away. Fomalhaut is thought to be around 400 million years old, but Vega could be closer to its 600 millionth birthday. For comparison, our sun is 4,600 million years old. Fomalhaut has a single candidate planet orbiting it, Fomalhaut b, which orbits at the inner edge of its cometary belt.

The Herschel and Spitzer telescopes detected infrared light emitted by warm and cold dust in discrete bands around Vega and Fomalhaut, discovering the new asteroid belt around Vega and confirming the existence of the other belts around both stars. Comets and the collisions of rocky chunks replenish the dust in these bands. The inner belts in these systems cannot be seen in visible light because the glare of their stars outshines them.

It would seem that Spitzer has quite a bit of productive and novel scientific life, duets included, left in it yet.

The Van Allen Probes mission is part of NASA’s Living With a Star geospace program to explore the fundamental processes that operate throughout the solar system, in particular those that generate hazardous space weather effects near Earth and phenomena that could affect solar system exploration.

In what could perhaps be described as serendipitous, scientists had switched on key instruments on the twin probes (which are described in detail in the second video below) just three days after launch from Cape Canaveral Air Force Station in Florida on August 30 last year.

Within days of launch, the Van Allen Probes kicked a major goal.

That decision was made in order that observations would overlap with those of another mission, the Solar, Anomalous, and Magnetospheric Particle Explorer (SAMPEX) – launched in 1992 – that was about to de-orbit and re-enter Earth’s atmosphere.

In practice it meant NASA’s mission scientists gathered data on the third radiation belt for four weeks before a shock-wave from the sun annihilated it.

Radiation belts

These belts are critical regions of space for modern society. They are affected by solar storms and space weather and can change dramatically. They can pose dangers to communication and GPS satellites as well as humans in space – as I’ll discuss shortly.

Named after their discoverer, the late pioneering NASA astrophysicist James Van Allen, these concentric, donut-shaped rings are filled with high-energy particles that gyrate, bounce, and drift through a region extending to 65,000 kilometres from the earth’s surface.

The inner belt, discovered in 1958, is composed mostly of high-energy protons, trapped within about 600-6,000 kilometres of Earth’s surface. Those particles are particularly damaging to satellites and humans in space. The International Space Station (ISS) orbits below this belt at 330-410 kilometres.

The second radiation belt was also discovered in 1958 using instruments designed and built by James Van Allen that were launched on Pioneer 3 and Explorer IV.

This larger belt is located 10,000 to 65,000 kilometres above Earth’s surface, and is at its most intense between 14,500 and 19,000 kilometres above Earth. The second belt is much more variable than the inner one. In addition to protons, it contains ions of oxygen and helium.

So what of the third belt? This new, outer zone is composed mainly of high-energy electrons and very energetic positive ions (mostly protons). As reported today in the journal Science, this torus formed on September 2 last year and persisted unchanged in a height range of 20,000-23,000 kilometres for four weeks. It was then disrupted by a shock-wave from the sun.

Space weather impacts on Earth

The radiation belts are part of a much larger space weather system driven by energy and material that erupt off the sun’s surface and fill the entire solar system. Besides emitting a continuous stream of plasma called the solar wind, the sun periodically releases billions of tons of matter in what are called coronal mass ejections.

These immense clouds of material, when directed towards Earth, can cause large magnetic storms in the space environment around Earth, the magnetosphere and the upper atmosphere.

The term space weather generally refers to conditions on the sun, in the solar wind, and within Earth’s magnetosphere, ionosphere and thermosphere that can influence the performance and reliability of space-borne and ground-based technological systems and can endanger human life or health.

Most spacecraft in Earth orbit operate partly or entirely within the radiation belts. During periods of intense space weather, the density of particles within the belts increases, making it more likely that a spacecraft’s sensitive electronics will be hit by a charged particle.

Ions striking satellites can overwhelm sensors, damage solar cells, and degrade wiring and other equipment. When conditions get especially rough in the radiation belts, satellites often switch to a safe mode to protect their systems.

When high-energy particles – those moving with enough energy to knock electrons out of atoms – collide with human tissue, they alter the chemical bonds between the molecules that make up the tissue’s cells.

Sometimes the damage is too great for a cell to repair and it no longer functions properly. Damage to DNA within cells may even lead to cancer-causing mutations.

During geomagnetic storms, the increased density and energy of particles trapped in the radiation belts means a greater chance that an astronaut will be hit by a damaging particle.

This picture depicts the long-range spin-spin interaction (blue wavy lines) in which the spin-sensitive detector on Earth’s surface interacts with geoelectrons (red dots) deep in Earth’s mantle. The arrows on the geoelectrons indicate their spin orientations, opposite that of Earth’s magnetic field lines (white arcs). (Illustration credit: Marc Airhart (University of Texas at Austin) and Steve Jacobsen (Northwestern University))

Being responsible for picking the week’s most interesting science stories is a fun and fascinating challenge. It pushes me to look beyond my own interests and explore what others find compelling. So I trust you find my ‘science making news’ selection of interest and delight; explore the quantum, human, off-world and mathematical highs of the week.

On the human scale, an international team of scientists has been investigating the antibiotic properties of sweat. More precisely, they discovered how a natural antibiotic called dermcidin, produced by our skin when we sweat, is a highly efficient tool to fight tuberculosis germs and other dangerous bugs.

Their results could contribute to the development of new antibiotics that control multi-resistant bacteria.

The benefits of a good night’s sleep are once again news. Researchers have shown that disruption of the body’s circadian rhythm can lead not only to obesity, but can also increase the risk of diabetes and heart disease.

Our study confirms that it is not only what you eat and how much you eat that is important for a healthy lifestyle, but when you eat is also very important.

At the quantum scale, the particle physicists are at it again. Not content with discovering the Higgs boson, they are shedding light (pardon the pun) on a possible fifth force in nature. In a breakthrough, physicists have established new limits on what scientists call “long-range spin-spin interactions” between atomic particles. These interactions have been proposed by theoretical physicists but have not yet been seen. If a long-range spin-spin force is found, it not only would revolutionise particle physics but might eventually provide geophysicists with a new tool that would allow them to directly study the spin-polarised electrons within Earth.

The most rewarding and surprising thing about this project was realizing that particle physics could actually be used to study the deep Earth.

The latest news from Mars is that Curiosity has relayed new images that confirm it has successfully obtained the first sample ever collected from the interior of a rock on another planet.

Many of us have been working toward this day for years. Getting final confirmation of successful drilling is incredibly gratifying. For the sampling team, this is the equivalent of the landing team going crazy after the successful touchdown.

To wrap up with one further piece of geek excitement: on January 25 at 23:30:26 UTC, the largest known prime number, 2^57,885,161 - 1, was discovered on Great Internet Mersenne Prime Search (GIMPS) volunteer Curtis Cooper’s computer. The new prime number, 2 multiplied by itself 57,885,161 times, less one, has 17,425,170 digits. With 360,000 CPUs peaking at 150 trillion calculations per second, GIMPS, now in its 17th year, is the longest continuously-running global “grassroots supercomputing” project in Internet history.
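Both claims are easy to check for yourself. The sketch below counts the digits of 2^57,885,161 - 1 with a logarithm, and implements the Lucas-Lehmer test that GIMPS uses – naively, so it is only practical here for small exponents:

```python
import math

def mersenne_digits(p):
    """Decimal digit count of the Mersenne number 2**p - 1.

    2**p has floor(p * log10(2)) + 1 digits, and subtracting 1 never
    shortens it, since a power of 2 is never a power of 10.
    """
    return math.floor(p * math.log10(2)) + 1

def lucas_lehmer(p):
    """Lucas-Lehmer test: for an odd prime p, 2**p - 1 is prime
    exactly when s == 0 after p - 2 steps of s -> s**2 - 2
    (mod 2**p - 1), starting from s = 4."""
    m = 2 ** p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# The record prime's length: mersenne_digits(57885161) gives 17425170.
# Small sanity checks: 2**13 - 1 = 8191 is prime,
# while 2**11 - 1 = 2047 = 23 * 89 is not.
```

GIMPS runs the same recurrence, just with heavily optimised big-number multiplication – squaring a 17-million-digit number tens of millions of times is what soaks up those 150 trillion calculations per second.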

Until next week’s Australian Science review, go geekily crazy and enjoy your weekend.