Science

10/26/2015

The outside of the Wendelstein 7-X stellarator with its conglomeration of equipment, ports, and supporting structure (Click Image To Enlarge)

In a large complex located at Greifswald in the north-east corner of Germany sits a new and unusual nuclear fusion reactor awaiting a few final tests before being powered up for the very first time. Dubbed the Wendelstein 7-X fusion stellarator, it has been more than 15 years in the making and is claimed to be so magnetically efficient that it will be able to contain super-hot plasma in its enormous magnetic field for more than 30 minutes at a time. If successful, this new reactor may help realize the long-held goal of continuous operation, which is essential for practical nuclear fusion power generation.

Created by the Max Planck Institute for Plasma Physics (IPP) and designed with the aid of a supercomputer, the Wendelstein 7-X is the first large-scale optimized stellarator of its type ever to be commissioned. With a name like something out of Hitchhiker's Guide to the Galaxy and a containment vessel that literally provides a new twist on the doughnut shape we see in standard tokamak fusion reactors, the quirky stellarator design aims to provide an inherently more stable environment for plasma and a more promising route for nuclear fusion research in general.

Initially an American design conceived by Lyman Spitzer at Princeton University in 1951, the stellarator was deemed too complex to build with the materials and techniques available in the mid-20th century, and the more easily constructed tokamak won out as the standard model for fusion research.

Though some stellarators have been constructed over the course of time – notably the predecessor to this latest iteration known as the Wendelstein 7-AS (Advanced Stellarator) – the calculations required to ensure ultimate plasma containment and control have only become possible with the advent of supercomputers.

As such, algorithms specifically created to bridge theory and practice have now been applied to the design of the Wendelstein 7-X, and its designers firmly believe that this latest version will have the stability required to be the precursor to full-blown, continuous nuclear fusion power generation.

For the eventual success of nuclear fusion power (essentially where two isotopes of hydrogen, deuterium and tritium, are subjected to such extreme temperatures and pressures that their mutual electrostatic repulsion is overcome and they fuse to form helium, releasing copious amounts of energy, most of it carried away by a fast neutron), stability is essential. This is because the enormous pressures and temperatures (around 100 million degrees Celsius, or 180 million °F) used to create the plasma, and then accelerate the resulting ion and electron soup around the containment vessel, mean that any instability in the magnetic containment field or the pressure vessel itself will result in degradation and ultimately the failure of the process.
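For reference, the deuterium–tritium reaction at the heart of this scheme releases about 17.6 MeV per fusion event, most of it carried away by the neutron:

\[ \mathrm{D} + \mathrm{T} \;\rightarrow\; {}^{4}\mathrm{He}\ (3.5\ \mathrm{MeV}) + \mathrm{n}\ (14.1\ \mathrm{MeV}) \]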

What is the concept underlying the Wendelstein 7-X fusion device? This video, produced from CAD models, illustrates how the device is configured and what objectives are being pursued by the fusion research conducted at the Greifswald branch of the Max Planck Institute for Plasma Physics with Wendelstein 7-X.

To achieve a more stable environment, the stellarator eschews the method of inducing current through the plasma to drive electrons and ions around the inside of the vessel as found in tokamak designs, instead relying entirely on external magnetic fields to move the particles along. In this way, stellarator designs are basically immune to the sudden and unexpected disruptions of plasma and the enormous – and often destructive – magnetic field collapses that sometimes occur in tokamaks.

A stellarator holds the plasma in a containment field that twists through a set of external magnetic coils, continuously steering the plasma away from the walls of the device. In a conventional tokamak, by contrast, with its doughnut-shaped containment vessel and electromagnet windings that loop through the center of the toroid and around the outside, the magnetic field is stronger on the inboard side of the ring, nearest the central hole, than on the outboard side. Plasma contained in a tokamak therefore tends to drift toward the outer walls, where confinement breaks down.
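The underlying reason is geometric: the toroidal field produced by a tokamak's ring of coils falls off with major radius R, so to a first approximation

\[ B_{\phi}(R) \;\propto\; \frac{1}{R}, \]

leaving the field weaker on the outboard side of the torus. Unless a twisting (poloidal) component is added to the field, the resulting gradient drives particle drifts that push the plasma outward toward the wall.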

A graphic depicting the plasma flow (red) in the stellarator and its magnetic coils (blue) (Click Image To Enlarge)

The stellarator, on the other hand, avoids this situation by twisting the entire containment vessel into a shape that constantly forces the plasma stream into the center of the reactor vessel as it continuously encounters magnetic fields in opposing positions along its entire length.

The advantages of the stellarator over the tokamak come at a cost, however, as the many twists and turns that give the stellarator an advantage in magnetic containment also mean that particles can simply be lost as they veer off course while following the path of the containment vessel itself. To help avoid this, a great many more magnetic coils are required for the stellarator; they must be set up at very close intervals around the structure and super-cooled with liquid helium for maximum efficiency.

Construction of the Wendelstein 7-X stellarator took over 1 million man-hours (Click Image To Enlarge)

In the case of the Wendelstein 7-X, the 50 non-planar superconducting electromagnets, each about 3.5 meters (11.5 ft) tall, together weigh around 425 tonnes (468 tons), and their placement makes construction difficult and their assembly fraught with problems. Not to mention the fact that piping around vast quantities of liquid helium to ensure that the electromagnets superconduct at temperatures close to absolute zero makes the Wendelstein 7-X a plumber's nightmare, and a tricky addition to an already difficult balancing act.

As such, the physical design of the stellarator itself requires access ports for fuel ingress and egress, along with a myriad of other entry points for instruments, sensors, and all the other paraphernalia necessary to monitor the enormous pressures, voltages, and temperatures that it will be subject to in operation.

Dr. Matthias Otte, who is responsible for the measurement process, reports:

“Once the flux surface diagnostics were placed in operation, we were immediately able to see the first magnetic surfaces. Our images clearly show how magnetic field lines create closed surfaces in many toroidal circulations”.

The flux surface diagnostics enables the structure of the field to be precisely measured. For this purpose, a thin electron beam is injected and moves along a field line in circular tracks through the evacuated plasma vessel. It leaves behind a tracer, which is created by collision of the electrons with residual gas in the vessel. If, in addition, a fluorescent rod is moved through the vessel cross section, light spots are created when the electron beam hits the rod. In the camera recording, the entire cross section of the magnetic field gradually becomes visible.

Despite all of these problems, tests to verify that the completed stellarator maintains sub-millimeter accuracy in the plasma path are progressing and show promise. In one recent test, an electron beam was injected into the stellarator and progressed along a predetermined field line in circular tracks through the evacuated plasma vessel. As it moved through the machine, the beam left a visible tracer in its wake, created by collisions between the beam electrons and the residual gas in the vessel.

Photograph combining the traces of an electron beam over multiple circuits around the inside of the containment vessel (Click Image To Enlarge)

Meanwhile, as the electron beam constantly circulated through the system, a fluorescent rod was pushed transversely through the vessel in cross section, and when the electron beam struck the rod, visible spots of light were created and the results recorded with a camera. In this way, the whole cross section of the magnetic field was gradually made visible.
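As a purely illustrative sketch of what such a measurement produces (a toy field and toy numbers, not IPP's actual field model or code), the short Python script below follows a single field line of a simple twisted field and records each point where it punctures a fixed poloidal plane; the accumulated puncture points outline a closed flux surface, much as the light spots on the fluorescent rod do:

```python
# Toy illustration of flux-surface mapping: follow one field line of a simple
# twisted (tokamak-like) magnetic field and record where it punctures the
# phi = 0 plane. This is a made-up field for demonstration only, not the
# real stellarator geometry.
import numpy as np

R0 = 5.5      # major radius of the toy magnetic axis (m) -- arbitrary value
B0 = 2.5      # toroidal field strength on axis (T)       -- arbitrary value
TWIST = 0.9   # strength of the poloidal component that twists the field line

def b_field(x, y, z):
    """Toroidal 1/R field plus a poloidal component circling the magnetic axis."""
    R = np.hypot(x, y)
    phi_hat = np.array([-y, x, 0.0]) / R                       # toroidal direction
    b_tor = B0 * (R0 / R) * phi_hat
    r_vec = np.array([(R - R0) * x / R, (R - R0) * y / R, z])  # offset from axis
    b_pol = TWIST * (B0 / R0) * np.cross(phi_hat, r_vec)       # poloidal twist
    return b_tor + b_pol

pos = np.array([R0 + 0.3, 0.0, 0.0])   # start slightly off-axis, like the e-beam
ds = 5e-3                              # step length along the field line (m)
punctures = []
prev_y = pos[1]
for _ in range(150_000):               # roughly 20 toroidal transits
    b = b_field(*pos)
    pos = pos + ds * b / np.linalg.norm(b)
    if prev_y < 0.0 <= pos[1] and pos[0] > 0.0:   # crossed the phi = 0 half-plane
        punctures.append((np.hypot(pos[0], pos[1]), pos[2]))
    prev_y = pos[1]

# Each (R, z) pair is one "light spot"; together they outline a flux surface.
print(f"recorded {len(punctures)} punctures, e.g. {punctures[:3]}")
```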

Coil tests are conducted from the control room, where the measured data from all test series are brought together and evaluated (Click Image To Enlarge)

Whilst in itself just another stepping stone toward the ultimate goal of practical fusion energy, the IPP stellarator marks an important juncture in the field. With tokamak-based reactors still requiring more energy in than they actually produce, the scientific community and general public alike have grown wary of the long-held promises surrounding nuclear fusion. And, though many bodies, such as the University of Washington, Lockheed Martin, and MIT, claim to be "close" to producing a working, sustainable, self-powering machine, nuclear fusion remains a pipe dream.

This is where IPP's proving of the technology over the coming months, leading to a full-blown commissioning of the machine, may well provide the nexus between theory and practicality and, if not deliver on the promise of boundless energy, at least provide a proof of concept and renew flagging interest in a field that may, one day, solve all of our energy needs.

With approval to continue from nuclear regulators in Germany expected by the end of this month, the Wendelstein 7-X stellarator is slated for its first fully operational tests in November this year. At a cost of more than €1 billion (US$1.1 billion) and over 1 million man-hours of work committed so far, the hopes of Europe's future being a nuclear fusion-powered one may well rest on the ability of this machine to perform as expected. Watch this space.

COMMENTARY: The objective of fusion research like that being conducted by the Max Planck Institute for Plasma Physics (IPP) is to develop a power source that is friendly to the climate and the environment. Like the sun, it would harvest energy from the fusion of atomic nuclei. To light the fusion fire in a future power station, the fuel – a hydrogen plasma – must be confined in magnetic fields and heated to a temperature of over 100 million degrees. The Wendelstein 7-X, which will be the largest stellarator-type fusion device in the world, will not produce energy but will enable the suitability of this type of device as a power station to be investigated. With plasma discharges lasting up to 30 minutes, it should demonstrate its most significant property – its ability to operate continuously.

A ring of 50 superconducting magnetic coils, each approximately 3.5 metres in height, is the key component of the device. Cooled with liquid helium to superconducting temperatures near absolute zero, the coils consume very little energy once switched on. Their special shapes are the result of refined optimisation calculations. Their task is to create a magnetic cage for the plasma with particularly good thermal-insulation properties.

In May 2014 the assembly of Wendelstein 7-X was completed on time, and for over a year preparations for operation have been under way. One by one, the operation of each technical system is being tested. From the end of April to the beginning of July 2015, attention turned to the magnetic coils. As soon as the functional capability of these central system components was confirmed (see IPP Info 6/15), the testing of the magnetic surfaces was carried out. Configuration of the computer-supported data collection for experimental operation is still to be carried out, and in the periphery of the device the equipment for monitoring and heating the plasma requires completion. The objective: the Wendelstein 7-X should produce its first plasma this year.

Let's wish the physicists at IPP much success in taking the first step in the development of sustainable, self-powering, clean and efficient fusion energy. This sort of science was long thought to be impossible due to the ultra-high temperatures required to create fusion energy. The radical Wendelstein 7-X stellarator, with its wacky, twisty, donut-shaped containment vessel, appears viable for containing the super-hot plasma, according to early tests. We hope that fusion energy theory becomes reality during the first real test in November, and that there are no dangerous accidents. Would hate to see $1.1 billion go up in smithereens.

10/23/2015

When a region of the brain called the claustrum is electrically stimulated, consciousness — self-awareness, sentience, whatever you want to call it — appears to turn off completely. (Click Image To Enlarge)

Researchers at George Washington University are reporting that they’ve discovered the human consciousness on-off switch, deep within the brain. When this region of the brain, called the claustrum, is electrically stimulated, consciousness — self-awareness, sentience, whatever you want to call it — appears to turn off completely. When the stimulation is removed, consciousness returns. The claustrum seems to bind together all of our senses, perceptions, and computations into a single, cohesive experience. This could have massive repercussions for people currently in a minimally conscious state (such as a coma), and for deciding once and for all which organisms are actually conscious. Are monkeys conscious? Cats and dogs? A fetus?

When it comes to human consciousness, much like the rest of our brain’s operation, there isn’t a whole lot in the way of actual scientific knowledge. Despite a century of “modern” neuroscience, we still only have a rough sketch of how the human brain works. Most theories, though, generally agree that consciousness is probably created by a part of the brain that integrates activity from different regions of the brain into a single, holistic experience. To put it in (very loose) computing terms, this seat of human consciousness would be somewhat like a CPU; without it, you’d just have a bunch of different parts that are theoretically functional, but not really capable of getting anything useful done.

The claustrum, below the neocortex, in a human brain (Click Image To Enlarge)

The research, led by Mohamad Koubeissi at GWU in Washington DC, originally set out to analyze a woman with epilepsy. The neuroscientists were stimulating regions of her brain with electrodes in an attempt to discover where her seizures originated. Then, when they stimulated the claustrum — a thin region of the brain underneath the neocortex — the patient slowly lost consciousness, and when the stimulation was removed, consciousness returned. During stimulation, the woman just stopped whatever she was doing (speaking, reading, moving) and stared blankly into space; when stimulation was removed, she continued as normal with no recollection of what had just happened. [DOI: 10.1016/j.yebeh.2014.05.027 – “Electrical stimulation of a small brain area reversibly disrupts consciousness”]

As you might expect when it comes to bleeding-edge neuroscience, there are some caveats to the research — most notably, the study only looked at the brain of one person, and due to her epilepsy (and previous removal of part of her hippocampus) she doesn’t necessarily represent a “normal” brain. In short, more research needs to be done — and following the publishing of this paper, you can be guaranteed that there will be more research into the claustrum.

COMMENTARY: While most of us automatically identify the brain as the headquarters of our awareness, neuroscientists seek a more precise location (and understanding) for this unique if everyday phenomenon. A new study by neuroscientists at Vanderbilt University finds that consciousness does not make its home in just one brain region. Instead, researchers say, awareness degrades the brain’s modular function and substitutes an integrated connectivity in which widespread communication arises across areas of the cortex. Consciousness, then, arises from cooperative and not solo brain activity.

Since the beginning of thought, philosophers have wondered where we derive our consciousness, and with the advent of sophisticated imaging technologies, neuroscientists have begun to explore this question in steadily increasing depth. Most recently, a 2014 study suggested that one region of the brain works as an on/off switch for awareness — when researchers electrically stimulated the claustrum of a patient (see video below), she instantly became unconscious. While this experiment does not prove consciousness resides in the claustrum, it raised many questions about the function of this unusual brain region: a thin, irregular structure of neurons hidden beneath the surface of the neocortex.

Francis Crick and Christof Koch, two pioneers in the field of human consciousness, theorize the claustrum functions as “a conductor coordinating a group of players in the orchestra, the various cortical regions.” Their hypothesis is based on the fact that the claustrum receives input from — and projects back to — almost all regions of the outside layer of the brain, the cortex.

Graphs and Images

For the current study, Vanderbilt University researchers investigated whether one or just a few areas of the brain might produce awareness. To accomplish their work, they used graph theory — a branch of mathematics focused on understanding complex networks — together with a simple brain imaging experiment.

The experiment began with participants lying down on the hard bed of an MRI scanner. While researchers observed, participants performed a simple task of detecting a disk as it briefly flashed on the screen before them. After each participant completed a number of trials, the researchers compared all the results. They labeled the trials in which participants detected the disk as “aware” and those in which they missed it as “unaware.”

Upon analysis, the researchers discovered that no one area or network of areas in the brain stood out as particularly active during awareness. In fact, the whole brain appeared to become more connected following each report of awareness.
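A minimal sketch of the kind of graph-theoretic comparison described here, using made-up connectivity matrices rather than the Vanderbilt fMRI data (the node count, thresholds, and measures below are illustrative assumptions only), might look like this in Python:

```python
# Illustrative only: compare graph modularity and global efficiency for
# hypothetical "aware" vs "unaware" functional-connectivity matrices.
# The matrices here are random stand-ins, not real fMRI data.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

rng = np.random.default_rng(0)

def random_connectivity(n=90, extra_coupling=0.0):
    """Symmetric 'correlation' matrix; extra_coupling raises cross-region links."""
    m = rng.random((n, n)) * 0.5 + extra_coupling
    m = (m + m.T) / 2.0
    np.fill_diagonal(m, 0.0)
    return m

def summarize(label, corr, threshold=0.4):
    """Threshold the matrix into a binary graph and report two network measures."""
    g = nx.from_numpy_array((corr > threshold).astype(int))
    comms = greedy_modularity_communities(g)
    q = modularity(g, comms)           # high Q = strongly modular (segregated)
    eff = nx.global_efficiency(g)      # high efficiency = well integrated
    print(f"{label:8s}  modularity Q = {q:.3f}   global efficiency = {eff:.3f}")

# "aware" trials are modeled here with slightly stronger widespread coupling
summarize("unaware", random_connectivity(extra_coupling=0.00))
summarize("aware",   random_connectivity(extra_coupling=0.10))
```

In this toy setup the "aware" matrix yields a less modular, more efficient graph, which is the qualitative pattern the study describes.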

Douglass Godwin, one of the authors of the study and a graduate student at Vanderbilt, told KurzweilAI.

“We take for granted how unified our experience of the world is. We don’t experience separate visual and auditory worlds; it’s all integrated into a single conscious experience. This widespread cross-network communication makes sense.”

Courtesy of an article dated July 14, 2014 appearing in Extreme Tech and an article dated March 17, 2015 appearing in Medical Daily

07/14/2015

Just minutes before the long-awaited flyby took place at 7:49 AM ET, NASA "teased" the final full-frame color image of Pluto set to be released before the event by publishing it on Instagram. It was taken at about 4 PM ET on July 13th, according to NASA, from 476,000 miles away. The high-resolution image was released after the flyby, and can be seen below.

Final image of the dwarf planet Pluto taken by the New Horizons spacecraft (Click Image To Enlarge)

In the above image, we can see the "heart" of Pluto in much greater detail than before, craters that were impossible to make out in previous images, and a great view of the dwarf planet's dark equatorial belt.

The New Horizons team celebrates the new image of Pluto (Click Image To Enlarge)

There are more images of the face of Pluto to come. The first true high-resolution mosaic image will be released tomorrow afternoon, and a few more will be released throughout the week. A much larger set will be released starting in September.

NASA jubilantly announces the successful flyby of Pluto by the New Horizons spacecraft with the following tweet:

Click Image To Enlarge

While the chance is around one in 10,000 that New Horizons will come into contact with debris during the flyby, spirits are high at mission control in Maryland. Ralph Semmel, director of the Johns Hopkins Applied Physics Laboratory, said.

"Tonight we're going to get the signal — and we will get the signal,"

NASA uploaded the following documentary video which details the journey of the New Horizons spacecraft from its early beginning to its flyby of Pluto.

The following infographic explains the mission of the New Horizons spacecraft beginning with its launch in 2006:

Click Image To Enlarge

COMMENTARY: It's incredible that after 3 billion miles and over 9 years, the New Horizons spacecraft was able to fly by the dwarf planet Pluto at a distance of about 2,700 miles from its surface. That is one incredible feat. I can hardly wait for those closeup images of the surface of Pluto. According to NASA, the pictures are being sent back to Earth using technology that existed nine years ago, so the process will be very slow and take nearly a year and a half to complete.

I still find it hard to believe that Pluto was only discovered in 1930, when it appeared as a very faint and small speck in the vastness of outer space with thousands of stars in its background. BTW, some of the ashes of Clyde Tombaugh, the original discoverer of Pluto, are carried on board New Horizons.

06/11/2015

NASA's Dawn spacecraft has snapped the best-ever images of the dwarf planet Ceres' bright spots, but the strange features still have researchers scratching their heads.

Dawn is a space probe launched by NASA in September 2007 with the mission of studying two of the three known protoplanets of the asteroid belt: Vesta and Ceres, the latter being the largest body in the belt. It is currently in orbit about its second target, the dwarf planet Ceres.

Dawn uses an ion propulsion system and arrived at Ceres on March 6, 2015, around the time the unidentified bright spots were first detected (see video below).

New images of dwarf planet Ceres, taken by NASA's Dawn spacecraft, show the cratered surface of this mysterious world in sharper detail than ever before. These are among the first snapshots from Dawn's second mapping orbit, which is 2,700 miles (4,400 kilometers) above Ceres.

Mysterious bright spots on Ceres taken by the Dawn space probe from a distance of 2,700 miles on June 6, 2015 - NASA-JPL (Click Image To Enlarge)

The new photos resolve the bright spots on Ceres into numerous individual points of varying sizes, with a central cluster. So far, scientists have found no obvious explanation for their observed locations or brightness levels.

The brightest ones lie within a crater about 55 miles (90 kilometers) wide, researchers said. You can see a video tour of Ceres' strange white spots on Space.com that shows how the odd features have come into focus for Dawn over the last two months. Chris Russell, principal investigator for the Dawn mission based at the University of California, Los Angeles, said.

"The bright spots in this configuration make Ceres unique from anything we've seen before in the solar system. The science team is working to understand their source. Reflection from ice is the leading candidate in my mind, but the team continues to consider alternate possibilities, such as salt. With closer views from the new orbit and multiple view angles, we soon will be better able to determine the nature of this enigmatic phenomenon."

Numerous other features on Ceres intrigue scientists as they contrast this world with others, including protoplanet Vesta, which Dawn visited for 14 months in 2011 and 2012. Craters abound on both bodies, but Ceres appears to have had more activity on its surface, with evidence of flows, landslides and collapsed structures.

Additionally, new images from Dawn's visible and infrared mapping spectrometer (VIR) show a portion of Ceres' cratered northern hemisphere, taken on May 16, including a true-color view and a temperature image. The temperature image is derived from data in the infrared light range. This instrument is also important in determining the nature of the bright spots.

Having arrived in its current orbit on June 3, Dawn will observe the dwarf planet from 2,700 miles (4,400 kilometers) above its surface until June 28. In orbits of about three days each, the spacecraft will conduct intensive observations of Ceres. It will then move toward its next orbit, at an altitude of 900 miles (1,450 kilometers), arriving in early August.

On March 6, 2015, Dawn made history as the first mission to visit a dwarf planet, and the first to orbit two distinct extraterrestrial targets. At its previous target, Vesta, Dawn took tens of thousands of images and made many observations about the body's composition and other properties.

Dawn's mission is managed by JPL for NASA's Science Mission Directorate in Washington. Dawn is a project of the directorate's Discovery Program, managed by NASA's Marshall Space Flight Center in Huntsville, Alabama. UCLA is responsible for overall Dawn mission science. Orbital ATK Inc., in Dulles, Virginia, designed and built the spacecraft. The German Aerospace Center, Max Planck Institute for Solar System Research, Italian Space Agency and Italian National Astrophysical Institute are international partners on the mission team.

Courtesy of an article dated June 11, 2015 appearing in Space.com and an article dated March 26, 2015 appearing in Space.com

08/12/2014

NASA is a major player in space science, so when a team from the agency this week presents evidence that "impossible" microwave thrusters seem to work, something strange is definitely going on. Either the results are completely wrong, or NASA has confirmed a major breakthrough in space propulsion.

Roger Shawyer (left), receiving a DTI SMART Award for his EmDrive concept in August 2001. (Click Image To Enlarge)

British scientist Roger Shawyer has been trying to interest people in his EmDrive for some years through his company Satellite Propulsion Research Ltd (SPR Ltd). Shawyer claims the EmDrive converts electric power into thrust, without the need for any propellant, by bouncing microwaves around in a closed container. He has built a number of demonstration systems, but critics reject his relativity-based theory and insist that, according to the law of conservation of momentum, it cannot work.

The EmDrive itself is simply a microwave resonating cavity in the form of a closed, truncated cone (See below). You fire up a big electrically-powered microwave generator and start beaming microwaves inside this thing, and the microwaves bounce around all over the place, exerting radiation pressure on the inside of the cavity.

According to Shawyer, the EmDrive is able to exert a small amount of thrust that propels it towards the large end of the cone. Shawyer says this happens because, inside the resonating cavity, the velocity of the microwaves changes significantly as the cavity diameter varies. The velocity changes enough, in fact, to exert a larger force on the larger end of the cavity and a smaller force on the smaller end, resulting in net thrust.

Prototype of the EmDrive microwave thruster engine developed by scientists at NASA. (Click Image To Enlarge)

SPR's EmDrive Demonstrator Engine (Side View)

SPR's EmDrive Demonstrator Engine (Front View) mounted on a test rig

A video clip of the initial part of an acceleration test run by SPR can be seen on YouTube:

The field strengths within the thruster equate to a power level of 17 MW. Signal leakage causes EMC effects within the fixed video camera. This leads to the apparent vertical movements.

The engine only starts to accelerate when the magnetron frequency locks to the resonant frequency of the thruster, following an initial warm up period. This test operation eliminates possible spurious forces.

The rotary air bearing supports a total load of 100kg, with a friction torque resulting in a calibrated resistance force of 8.2 gm at the engine centre of thrust.

For this test a thrust of 96 mN was recorded for an input power of 334 W.

Confirmation of the results of Roger Shawyer's EmDrive microwave thruster came from a team of Chinese researchers headed by Yang Juan, Professor of Propulsion Theory and Engineering of Aeronautics and Astronautics at the Northwestern Polytechnic University in Xi'an, with the findings published in a research paper titled "Net thrust measurement of propellantless microwave thruster." The paper was originally written on June 9, 2011, finally published in 2012 in the academic journal Acta Physica Sinica, and has since been translated into English.

Yang Juan, Professor of Propulsion Theory and Engineering of Aeronautics and Astronautics at the Northwestern Polytechnic University in Xi'an. (Click Image To Enlarge)

The Chinese team led by Professor Yang Juan built its own EmDrive and confirmed that it produced 720 mN (about 72 grams) of thrust, enough for a practical satellite thruster. Such a thruster could be powered by solar electricity, eliminating the need for the supply of propellant that occupies up to half the launch mass of many satellites. The Chinese work attracted little attention; it seems that nobody in the West believed in it.

However, a US scientist, Guido Fetta, has built his own propellant-less microwave thruster, and managed to persuade NASA to test it out. The test results were presented on July 30 at the 50th Joint Propulsion Conference in Cleveland, Ohio. Astonishingly enough, they are positive.

NASA tested a different version of the EmDrive called the Cannae Drive designed by Guido Fetta. (Click Image To Enlarge)

According to Guido Fetta, the "Cannae Drive" was named after the Battle of Cannae, in which Hannibal decisively defeated a much stronger Roman army: you're at your best when you are in a tight corner. However, it's hard not to suspect that Star Trek's Engineer Scott -- "I cannae change the laws of physics" -- might also be an influence. (It was formerly known as the Q-Drive.)

The five-member NASA research team spent six days setting up test equipment, followed by two days of experiments with various configurations. These tests included using a "null drive" similar to the live version but modified so it would not work, and using a device that would produce the same load on the apparatus, to establish whether the thrust might be produced by something unrelated to the actual drive. They also turned the drive around the other way to check whether that had any effect.

In January 2014, the NASA research team also tested Shawyer's EmDrive design. The test results for this were also positive, and in fact their tapered-cavity drive, derived from the Chinese drive which is in turn based on Shawyer's EmDrive, produced 91 micronewtons of thrust for 17 watts of power, compared to the 40 micronewtons of thrust from 28 watts for the Cannae drive.

In her research paper, Professor Yang Juan describes China's iteration of Shawyer's EmDrive that's able to generate 72 grams of thrust with 2,500 watts of electricity. It doesn't sound like a huge amount, but if you compare it to the hands-down most efficient spacecraft engine we've got right now (where efficiency is at an absolute premium), an ion thruster, the Chinese EmDrive gets you four times as much thrust from half as much power without sucking down any fuel at all. Yeah, you need electricity, but electricity is cheap in space and cheaper on the ground. Anyway, you can read the paper here, and if you can make conclusive heads or tails of it, please do us all a favor and explain it in the comments. Below is an infographic comparing the Chinese EmDrive with the European Space Agency's SMART-1 ion engine:

Click Image To Enlarge
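Taking the figures quoted in this post at face value, a quick back-of-envelope comparison of thrust per unit of input power looks like this (the script only restates the reported numbers and says nothing about whether the measurements are valid):

```python
# Back-of-envelope thrust-to-power ratios computed from the figures quoted in
# this post. This only restates the reported numbers; it says nothing about
# whether the underlying measurements are valid.
reported = {
    "SPR EmDrive demo (96 mN @ 334 W)":    (96e-3,  334.0),
    "Chinese EmDrive (720 mN @ 2,500 W)":  (720e-3, 2500.0),
    "NASA tapered cavity (91 uN @ 17 W)":  (91e-6,  17.0),
    "NASA Cannae Drive (40 uN @ 28 W)":    (40e-6,  28.0),
}

for name, (thrust_n, power_w) in reported.items():
    ratio_mn_per_kw = thrust_n / power_w * 1e6   # convert N/W to mN/kW
    print(f"{name:40s} {ratio_mn_per_kw:8.2f} mN/kW")
```

On these numbers the SPR demonstrator and the Chinese device work out to nearly identical thrust per watt — roughly 290 mN per kilowatt — while the two NASA measurements come in about two orders of magnitude lower.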

Back in the 90s, NASA tested what was claimed to be an antigravity device based on spinning superconducting discs. That was reported to give good test results, until researchers realised that interference from the device was affecting their measuring instruments. They have probably learned a lot since then.

The torsion balance they used to test the thrust was sensitive enough to detect a thrust of less than ten micronewtons, but the drive actually produced 30 to 50 micronewtons -- less than a thousandth of the Chinese results, but emphatically a positive result, in spite of the law of conservation of momentum:

"Test results indicate that the RF resonant cavity thruster design, which is unique as an electric propulsion device, is producing a force that is not attributable to any classical electromagnetic phenomenon and therefore is potentially demonstrating an interaction with the quantum vacuum virtual plasma."

This last line implies that the drive may work by pushing against the ghostly cloud of particles and anti-particles that are constantly popping into being and disappearing again in empty space. But the NASA team has avoided trying to explain its results in favour of simply reporting what it found:

"This paper will not address the physics of the quantum vacuum plasma thruster, but instead will describe the test integration, test operations, and the results obtained from the test campaign."

Shawyer himself, who sent test examples of the EmDrive to the US in 2009, sees the similarity between the two.

He believes the design accounts for the Cannae Drive's comparatively low thrust. He says.

"From what I understand of the NASA and Cannae work -- their RF thruster actually operates along similar lines to EmDrive, except that the asymmetric force derives from a reduced reflection coefficient at one end plate. Of course this degrades the Q and hence the specific thrust that can be obtained."

Fetta is working on a number of projects which he is not able to discuss at present, and NASA's PR team was not able to get any comments from the research team. However, it's fair to assume that the results will be picked over very closely indeed, like CERN's anomalous faster-than-light neutrinos. The neutrino issue was cleared up fairly quickly, but given that this appears to be at least the third independent propellant-less thruster to work in tests, the anomalous thrust may prove much harder to explain away.

The NASA paper projects a 'conservative' manned mission to Mars from Earth orbit, with a 90-ton spacecraft driven by the new technology. Using a 2-megawatt nuclear power source, it can develop 800 newtons (180 pounds) of thrust. The entire mission would take eight months, including a 70-day stay on Mars.
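As a rough sanity check on those figures (using only the numbers quoted above, and taking 90 tons as roughly 9 × 10⁴ kg), the implied acceleration is tiny but continuous:

\[ a = \frac{F}{m} = \frac{800\ \mathrm{N}}{9\times10^{4}\ \mathrm{kg}} \approx 8.9\times10^{-3}\ \mathrm{m/s^{2}} \]

Sustained for weeks at a time, even that small acceleration accumulates to tens of kilometres per second of velocity change, which is what makes the short transit times plausible on paper.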

This compares with NASA's plans using conventional technology which takes six months just to get there, and requires several hundred tons to be put into Earth's orbit to start with. You also have to stay there for at least 18 months while you wait for the planets to align again for the journey back. The new drive provides enough thrust to overcome the gravitational attraction of the Sun at these distances, which makes manoeuvring much easier.

A less conservative projection has an advanced drive developing ten times as much thrust for the same power -- this cuts the transit time to Mars to 28 days, and can generally fly around the solar system at will, a true NASA dream machine.

COMMENTARY: The validation of Roger Shawyer's electromagnetic drive, or EmDrive, appears to be a potential game-changer for the nation that can develop a fully functional and scalable microwave-powered EmDrive and prove its efficiency and reliability in outer space.

Building a propellantless rocket thruster that meets or exceeds the requirements for future manned space missions to Mars, or even to neighboring star systems, will depend on exploiting this EmDrive technology to its fullest. In fact, the future of mankind could rest on just such a propellantless rocket thruster.

The world is rapidly running out of the natural resources used in producing rocket propellants, and since anti-magnetic propulsion systems do not appear to be in our immediate future, ion-powered thrusters and EmDrive-powered systems are two options that are open to exploitation.

I believe that our country should do everything possible to develop such a propellantless microwave-powered engine thruster for future space travel.

06/22/2014

Graphene transistors visible on a piece of flexible plastic. Graphene is not only the hardest material in the world, but also one of the most pliable. (Click Image To Enlarge)

I just want to say one word to you. Just one word.

No, fans of “The Graduate,” the word isn’t “plastics.”

It’s “graphene.”

Graphene is the strongest, thinnest material known to exist. A form of carbon, it can conduct electricity and heat better than anything else. And get ready for this: It is not only the hardest material in the world, but also one of the most pliable.

(Click Image To Enlarge)

Only a single atom thick, it has been called the wonder material.

Graphene could change the electronics industry, ushering in flexible devices, supercharged quantum computers, electronic clothing and computers that can interface with the cells in your body.

While the material was discovered a decade ago, it started to gain attention in 2010 when two physicists at the University of Manchester were awarded the Nobel Prize for their experiments with it. More recently, researchers have zeroed in on how to commercially produce graphene.

Graphene, often touted as a miracle material, is also as brittle as ordinary ceramic and susceptible to cracking. (Click Image To Enlarge)

The American Chemical Society said in 2012 that graphene was discovered to be 200 times stronger than steel and so thin that a single ounce of it could cover 28 football fields. Chinese scientists have created a graphene aerogel, an ultralight material derived from a gel, that is one-seventh the weight of air. A cubic inch of the material could balance on one blade of grass.

“Graphene is one of the few materials in the world that is transparent, conductive and flexible — all at the same time. All of these properties together are extremely rare to find in one material.”

So what do you do with graphene? Physicists and researchers say that we will soon be able to make electronics that are thinner, faster and cheaper than anything based on silicon, with the option of making them clear and flexible. Long-lasting batteries that can be submerged in water are another possibility.

Click Image To Enlarge

In 2011, researchers at Northwestern University built a battery that incorporated graphene and silicon, which the university said could lead to a cellphone that “stayed charged for more than a week and recharged in just 15 minutes.” In 2012, the American Chemical Society said that advancements in graphene were leading to touch-screen electronics that “could make cellphones as thin as a piece of paper and foldable enough to slip into a pocket.”

Dr. Vijayaraghavan is building an array of sensors out of graphene — including gas sensors, biosensors and light sensors — that are far smaller than what has come before.

Scientists at Samsung's Advanced Institute of Technology (SAIT) and Sungkyunkwan University in South Korea discovered a new method for growing large-area, single-crystal, wafer-scale graphene. (Click Image To Enlarge)

And in April 2014, researchers at the Samsung Advanced Institute of Technology, working with Sungkyunkwan University in South Korea, said that Samsung had figured out how to create high-quality graphene on silicon wafers, which could be used for the production of graphene transistors. Samsung said in a statement that these advancements meant it could start making “flexible displays, wearables and other next-generation electronic devices.”

Sebastian Anthony, a reporter at Extreme Tech, said that Samsung’s breakthrough could end up being the “holy grail of commercial graphene production.”

Samsung is not the only company working to develop graphene. Researchers at IBM, Nokia and SanDisk have been experimenting with the material to create sensors, transistors and memory storage.

When these electronics finally hit store shelves, they could look and feel like nothing we’ve ever seen.

James Hone, a professor of mechanical engineering at Columbia University, said research in his lab led to the discovery that graphene could stretch by 20 percent while still remaining able to conduct electricity. He said.

“You know what else you can stretch by 20 percent? Rubber. In comparison, silicon, which is in today’s electronics, can only stretch by 1 percent before it cracks.”

He continued:

“That’s just one of the crazy things about this material — there’s really nothing else quite like it.”

The real kicker? Graphene is inexpensive.

If you think of something in today’s electronics industry, it can most likely be made better, smaller and cheaper with graphene.

Scientists at the University of California, Berkeley made graphene speakers last year that delivered sound at quality equal to or better than a pair of commercial Sennheiser earphones. And they were much smaller.

Another fascinating aspect of graphene is its ability to be submerged in liquids without oxidizing, unlike other conductive materials.

As a result, Dr. Vijayaraghavan said, graphene research is leading to experiments where electronics can integrate with biological systems. In other words, you could have a graphene gadget implanted in you that could read your nervous system or talk to your cells.

But while researchers believe graphene will be used in next-generation gadgets, there are entire industries that build electronics using traditional silicon chips and transistors, and they could be slow to adopt graphene counterparts.

If that is the case, graphene might end up being used in other industries before it becomes part of electronics. Last year, the Bill and Melinda Gates Foundation paid for the development of a graphene-based condom that is thin, light and impenetrable. Carmakers are exploring building electronic cars with bodies made of graphene that are not only protective, but act as solar panels that charge the car’s battery. Airline makers also hope to build planes out of graphene.

If all that isn’t enough, an international team of researchers based at M.I.T. has performed tests that could lead to the creation of quantum computers, which could become a big part of computing in the future.

So forget plastics. There’s a great future in graphene. Think about it.

COMMENTARY: Graphene may be one of the strongest materials on the planet, but a new study raises questions about the limits of using it in the real world.

When material scientists measured the fracture toughness of imperfect graphene for the first time, they found it to be somewhat brittle.

While it’s still very useful, graphene is really only as strong as its weakest link, which they determined to be “substantially lower” than the intrinsic strength of graphene.

An electron microscope image shows a pre-crack in a suspended sheet of graphene used to measure the overall strength of the sheet - The Nanomaterials, Nanomechanics and Nanodevices Lab-Rice University. (Click Image To Enlarge)

A pre-cracked sheet of graphene was suspended and pulled apart - The Nanomaterials, Nanomechanics and Nanodevices Lab-Rice University. (Click Image To Enlarge)

Ting Zhu, an associate professor at the Georgia Institute of Technology, says.

“Graphene has exceptional physical properties, but to use it in real applications, we have to understand the useful strength of large-area graphene, which is controlled by the fracture toughness.”

Zhu and Jun Lou, an associate professor at Rice University, report in the journal Nature Communications the results of tests in which they physically pulled graphene apart to see how much force it would take. Specifically, they wanted to see if graphene follows the century-old Griffith theory that quantifies the useful strength of brittle materials.

It does, Lou says.

“Remarkably, in this case, thermodynamic energy still rules.”

PERFECT VS. IMPERFECT

Imperfections in graphene drastically lessen its strength—with an upper limit of about 100 gigapascals (GPa) for perfect graphene previously measured by nanoindentation—according to physical testing at Rice and molecular dynamics simulations at Georgia Tech.

That’s important for engineers to understand as they think about using graphene for flexible electronics, composite material, and other applications in which stresses on microscopic flaws could lead to failure.

The Griffith criterion developed by a British engineer during World War I describes the relationship between the size of a crack in a material and the force required to make that crack grow. Ultimately, A.A. Griffith hoped to understand why brittle materials fail.
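In its simplest form, the Griffith criterion says that the stress at which a crack of length 2a begins to grow depends on the material's stiffness and surface energy:

\[ \sigma_{f} = \sqrt{\frac{2E\gamma_{s}}{\pi a}} \]

where E is Young's modulus, \(\gamma_{s}\) the surface energy per unit area, and a the half-length of the crack. A longer pre-existing crack therefore fails at a lower stress, which is exactly the behaviour the Rice and Georgia Tech team set out to measure in graphene.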

Graphene, it turns out, is no different from the glass fibers Griffith tested.

Lou says.

“Everybody thinks the carbon-carbon bond is the strongest bond in nature, so the material must be very good, but that’s not true anymore, once you have those defects. The larger the sheet, the higher the probability of defects. That’s well known in the ceramic community.”

A defect can be as small as an atom missing from the hexagonal lattice of graphene. But for a real-world test, the researchers had to make a defect of their own—a pre-crack—they could actually see.

He says.

“We know there will be pinholes and other defects in graphene. The pre-crack overshadows those defects to become the weakest spot, so I know exactly where the fracture will happen when we pull it."

He adds.

“The material resistance to the crack growth—the fracture toughness—is what we’re measuring here, and that’s a very important engineering property.”

Additional researchers from Rice, Georgia Tech, Nanyang Technological University in Singapore, and Tianjin Polytechnic University in China collaborated on the project, which received support from the Welch Foundation, the National Science Foundation, the US Office of Naval Research, and the Korean Institute of Machinery and Materials.

01/28/2014

Supernova 2014J has brightened to 11th magnitude in M82, off the Big Dipper. It's visible in amateur telescopes during the evening.

A surprise supernova has erupted in M82, the famous nearby irregular galaxy in Ursa Major. Observers are reporting it at about magnitude 11.3 as of Thursday, January 23rd, with a color on the orange side of white.

The supernova in M82 as imaged by Leonid Elenin (Lyubertsy, Russia) and I. Molotov (Moscow, Russia) on Jan. 22.396. It's located at right ascension 9h 55m 42.2s, declination +69° 40′ 26″. It was V magnitude 11.7 at the time. Image by Leonid Elenin. (Click Image To Enlarge)

A spectrum reported by Yi Cao and colleagues (Caltech) suggests that it may still be two weeks away from reaching its peak brightness. Spectra show it to be a Type Ia supernova — an exploded white dwarf — with debris expanding at 20,000 kilometers per second. It is reddened, and hence must also be dimmed, by dust in M82 along our line of sight.

The M82 supernova before and after images (Click Image To Enlarge)

M82 is a near neighbor as galaxies go, at a distance of 11 or 12 million light-years. It's a favorite for amateur astronomers and researchers alike, with its thick dust bands, sprays of gas, and bright center undergoing massive star formation. The supernova is not in the central star-forming region but off to one side, 58 arcseconds to the west-southwest.
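A rough check (assuming a typical Type Ia peak absolute magnitude of about −19, which is not stated in the article) shows why the reddening matters. At roughly 3.5 Mpc, M82's distance modulus is

\[ \mu = 5\log_{10}\!\left(\frac{d}{10\ \mathrm{pc}}\right) \approx 5\log_{10}\!\left(3.5\times10^{5}\right) \approx 27.7, \]

so an unobscured Type Ia should peak near apparent magnitude 8.5 to 9. A supernova reported at 11.3 and still brightening therefore implies a substantial amount of dimming by dust along the line of sight, consistent with the reddening noted above.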

Remarkably, the supernova went undiscovered for a week as it brightened. Prediscovery unfiltered CCD images by K. Itagaki of Yamagata, Japan, show nothing at its location to as faint as magnitude 17.0 through January 14.5. But on January 15.57 it was magnitude 14.4; on January 16.64 it was 13.9; on January 17.61, 13.3; January 19.62, 12.2; and January 20.62, 11.9.

This is the starburst galaxy M82 imaged by Hubble in 2006, with approximate location of the #supernova noted. (Click Image To Enlarge)

M82 is well up in the northeastern sky by 7 or 8 p.m. (for observers at mid-northern latitudes). The waning Moon doesn't rise until much later.

The new point of light received the name Supernova 2014J once its nature was confirmed. It originally went by the preliminary designation PSN J09554214+6940260.

Animation of the M82 supernova. (Click Image To Enlarge)

Here's a comparison-star chart from the American Association of Variable Star Observers (AAVSO). North is up, east is left, the chart is 1° wide, and stars are plotted to magnitude 13.5. If you want other parameters, or if the link fails, make your own chart using the AAVSO Variable Star Plotter. For the star name enter SN 2014J. The chart does not plot the galaxy.

M82, the galaxy where the supernova occurred, is located near M81, just above the Big Dipper facing north. (Click Image To Enlarge)

A Flukey Find

The first people to recognize the supernova were a group of students — Ben Cooke, Tom Wright, Matthew Wilde and Guy Pollack, assisted by teaching fellow Stephen J. Fossey — taking a quick image at the University College London Observatory (within the London city limits!) on the evening of January 21st, at 19:20 UT.

"The discovery was a fluke, a 10-minute telescope workshop for undergraduate students that led to a global scramble to acquire confirming images and spectra."

Fossey says.

"The weather was closing in, with increasing cloud, so instead of the planned practical astronomy class, I gave the students an introductory demonstration of how to use the CCD camera on one of the observatory’s automated 0.35-meter telescopes. The students chose M82, a bright and photogenic galaxy, as their target, as it was in one of the shrinking patches of clear sky."

While adjusting the telescope’s position, Fossey noticed a star overlaid on the galaxy which he did not recognise from previous observations. They inspected online archive images of the galaxy, and it became apparent that there was indeed a new starlike object in M82. With clouds closing in, they switched to taking a rapid series of 1- and 2-minute exposures through different colour filters to check that the object persisted, and to be able to measure its brightness and colour.

The original press release, and the BBC repeating it, claimed that this is the nearest supernova since Supernova 1987A in the Large Magellanic Cloud. In fact SN 1993J in M81 was at essentially the same distance within the uncertainties, and two subsequent supernovae, SN 2004am and SN 2008iz (an obscured radio supernova), occurred within M82 itself.

12/14/2013

Several hours later, the lander will deploy a robotic rover called Yutu, which translates as "Jade Rabbit".

The touchdown took place on a flat plain called Sinus Iridum.

The Chang'e-3 mission launched atop a Chinese-developed Long March 3B rocket on 1 December from Xichang in the country's south.

China's space mission team celebrate after the landing

The official Xinhua news service reported that the craft began its descent just after 1300 GMT (2100 Beijing time), touching down in Sinus Iridum (the Bay of Rainbows) 11 minutes later.

State television showed pictures of the moon's surface as the lander touched down and an eye-level view of the landing site was released later on Saturday. Staff at mission control in Beijing clapped and celebrated after confirmation came through.

The probe's soft-landing was the most difficult task during the mission, Wu Weiren, the lunar programme's chief designer, told Xinhua.

Chinese scientists tested the moon rover ahead of its launch. It is expected to land on the Moon on December 14. (Click Image To Enlarge)

It is the third robotic rover mission to land on the lunar surface, but the Chinese vehicle carries a more sophisticated payload than previous missions, including ground-penetrating radar which will gather measurements of the lunar soil and crust.

"It's still a significant technological challenge to land on another world," said Peter Bond, consultant editor for Jane's Space Systems and Industry told the AP news agency.

"You have to use rocket motors for the descent and you have to make sure you go down at the right angle and the right rate of descent and you don't end up in a crater or on top of a large rock."

The landing module actively reduced its speed at about 15km from the Moon's surface.

Chinese scientists celebrated at the control center in Beijing after China's first lunar rover touched down on the surface of the Moon. (Click Image To Enlarge)

When it reached a distance of 100m from the surface, the craft fired thrusters to slow its descent.

At a distance of 4m, the lander switched off the thrusters and fell to the lunar surface.
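A free fall from that height is gentle in the Moon's weak gravity; as a back-of-envelope figure (not taken from the mission reports), with g ≈ 1.62 m/s² the touchdown speed is roughly

\[ v = \sqrt{2gh} = \sqrt{2 \times 1.62\ \mathrm{m/s^{2}} \times 4\ \mathrm{m}} \approx 3.6\ \mathrm{m/s}, \]

slow enough for the landing legs to absorb.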

The Jade Rabbit was expected to be deployed several hours after touchdown, driving down a ramp lowered by the landing module.

The first time China launched an unmanned spacecraft was in 1999, pictured. It is only the third country to have done so, after Russia and the US. (Click Image To Enlarge)

Reports suggest the lander and rover will photograph each other at some point on Sunday.

According to Chinese space scientists, the mission is designed to test new technologies, gather scientific data and build intellectual expertise, as well as scouting for mineral resources that could eventually be mined.

Schematic showing how the Jade Rabbit robotic rover fired its retro-rockets to make a soft landing on the surface of the Moon. (Click Image To Enlarge)

Sun Huixian, a space engineer with the Chinese lunar programme, said.

"China's lunar program is an important component of mankind's activities to explore [the] peaceful use of space."

The 120kg (260lb) Jade Rabbit rover can reportedly climb slopes of up to 30 degrees and travel at 200m (660ft) per hour.

Its name - chosen in an online poll of 3.4 million voters - derives from an ancient Chinese myth about a rabbit living on the moon as the pet of the lunar goddess Chang'e.

The rover and lander are powered by solar panels but some sources suggest they also carry radioisotope heating units (RHUs), containing plutonium-238 to keep them warm during the cold lunar night.

Dean Cheng, a senior research fellow at the Heritage Foundation, a conservative think-tank in Washington DC, said China's space programme was a good fit with China's concept of "comprehensive national power". This might be described as a measure of a state's all-round capabilities.

Speaking of space exploration, he told BBC News.

"It's a reflection of your economic power, because you need spare resources to have a space programme. It clearly has military implications because so much space technology is dual use".

He added:

"It reflects your scientific and technological capabilities, it supports your diplomacy by making you appear strong. China is saying: 'We are doing something that only two other countries have done before - the US and the Soviet Union."

Mr Cheng explained that the mission would also advertise the country as a destination for commercial space launches, as well as providing an opportunity to test China's deep-space tracking and communications.

"The rover will reportedly be under Earth control at various points of its manoeuvres on the lunar surface. Such a space observation and tracking system has implications not only for space exploration but for national security, as it can be used to maintain space surveillance, keeping watch over Chinese and other nations' space assets."

China has been methodically and patiently building up the key elements needed for an advanced space programme - from launchers to manned missions in Earth orbit to unmanned planetary craft - and it is investing heavily.

"China wants to go to the Moon for geostrategic reasons and domestic legitimacy. With the US exploration moribund at best, that opens a window for China to be perceived as the global technology leader - though the US still has more, and more advanced, assets in space."

The landing site is a flat volcanic plain, part of a larger feature known as Mare Imbrium that forms the right eye of the "Man in the Moon".

The lander will operate there for a year, while the rover is expected to work for some three months.

After this, a mission to bring samples of lunar soil back to Earth is planned for 2017. And this may set the stage for further robotic missions, and - perhaps - a crewed lunar mission in the 2020s.

COMMENTARY: Yutu is designed to roam the lunar surface for at least 90 Earth days – three Lunar days – covering an area of about five square kilometres.

It will send probes beneath the surface as well as taking high-resolution images of the rock, a flat area formed from the molten basalt released by lunar volcanoes several billion years ago.

The journey of the Chang’e-3 probe and its final landing will be closely monitored by the European Space Agency (ESA), which is cooperating closely with China. ESA’s own launch station in Kourou, French Guiana, will immediately start receiving signals from the mission after take-off and it will upload commands to the probe on behalf of the Chinese control centre.

Thomas Reiter, director of ESA's human spaceflight operations, said:

"Whether for human or robotic missions, international cooperation like this is necessary for the future exploration of planets, moons and asteroids, benefitting everyone."

In recent years, China has made considerable progress in its space programme.

In June, three Chinese astronauts spent 15 days in orbit and docked their craft with an experimental space laboratory.

In 2007, the country despatched an unmanned spacecraft called Chang'e to orbit the Moon. The craft stayed in space for 16 months before being intentionally crashed on to the Moon's surface.

The name Jade Rabbit was chosen after an online poll in which millions took part.

Ouyang Ziyuan, head of the moon rover project, told Xinhua earlier this week that the ancient beliefs had their origins in the marks left by impacts on the lunar landscape.

"There are several black spots on the moon's surface. Our ancient people imagined they were a moon palace, osmanthus trees, and a jade rabbit," he said.

China sent its first astronaut into space in 2003, becoming the third country after Russia and the United States to achieve manned space travel independently.

The military-backed space programme is a source of national pride.

Courtesy of an article dated December 14, 2013 appearing in BBC News and an article dated November 30, 2013 appearing in the Daily Mail

12/10/2013

The Nobel Prize in Physics 2013 was awarded jointly to François Englert (left) and Peter W. Higgs (right) "for the theoretical discovery of a mechanism that contributes to our understanding of the origin of mass of subatomic particles, and which recently was confirmed through the discovery of the predicted fundamental particle, by the ATLAS and CMS experiments at CERN's Large Hadron Collider"

Click Images To Enlarge

François Englert and Peter W. Higgs are jointly awarded the Nobel Prize in Physics 2013 for the theory of how particles acquire mass. In 1964, they proposed the theory independently of each other (Englert together with his now deceased colleague Robert Brout). In 2012, their ideas were confirmed by the discovery of a so-called Higgs particle at the CERN laboratory outside Geneva in Switzerland.

The awarded theory is a central part of the Standard Model of particle physics that describes how the world is constructed. According to the Standard Model, everything, from flowers and people to stars and planets, consists of just a few building blocks: matter particles. These particles are governed by forces mediated by force particles that make sure everything works as it should.

The entire Standard Model also rests on the existence of a special kind of particle: the Higgs particle. This particle originates from an invisible field that fills up all space. Even when the universe seems empty this field is there. Without it, we would not exist, because it is from contact with the field that particles acquire mass. The theory proposed by Englert and Higgs describes this process.

On 4 July 2012, at the CERN laboratory for particle physics, the theory was confirmed by the discovery of a Higgs particle. CERN’s particle collider, LHC (Large Hadron Collider), is probably the largest and the most complex machine ever constructed by humans. Two research groups of some 3,000 scientists each, ATLAS and CMS, managed to extract the Higgs particle from billions of particle collisions in the LHC.
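
Conceptually, extracting a new particle from billions of collisions means looking for a small excess of events at one invariant mass sitting on top of an enormous, smooth background. The short Python sketch below is a toy version of such a "bump hunt" on simulated data: an exponentially falling background plus a small Gaussian peak near 125 GeV, with the excess in a signal window compared against an estimate from the neighbouring sidebands. All event counts, widths and window choices are invented for illustration and have nothing to do with the actual ATLAS or CMS analyses.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy dataset: a smooth, falling "background" mass spectrum plus a small
# "signal" peak near 125 GeV. Numbers are purely illustrative.
background = 100.0 + rng.exponential(scale=200.0, size=500_000)  # masses in GeV
signal = rng.normal(loc=125.0, scale=2.0, size=800)
masses = np.concatenate([background, signal])

# Count events in a narrow window around the candidate mass, and estimate the
# background there from sidebands of equal total width on either side.
window = (masses > 121) & (masses < 129)
sidebands = ((masses > 111) & (masses < 119)) | ((masses > 131) & (masses < 139))

observed = window.sum()
expected_background = sidebands.sum() / 2.0
excess = observed - expected_background
significance = excess / np.sqrt(expected_background)  # crude Poisson estimate

print(f"observed: {observed}, expected background: {expected_background:.0f}")
print(f"excess: {excess:.0f} events, roughly {significance:.1f} sigma")
```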

Even though it is a great achievement to have found the Higgs particle — the missing piece in the Standard Model puzzle — the Standard Model is not the final piece in the cosmic puzzle. One of the reasons for this is that the Standard Model treats certain particles, neutrinos, as being virtually massless, whereas recent studies show that they actually do have mass. Another reason is that the model only describes visible matter, which only accounts for one fifth of all matter in the cosmos. To find the mysterious dark matter is one of the objectives as scientists continue the chase of unknown particles at CERN.

François Baron Englert was born in 1932 and is a Belgian theoretical physicist and 2013 Nobel Prize laureate (shared with Peter Higgs). He is Professor emeritus at the Université libre de Bruxelles (ULB), where he is a member of the Service de Physique Théorique. He is also a Sackler Professor by Special Appointment in the School of Physics and Astronomy at Tel Aviv University and a member of the Institute for Quantum Studies at Chapman University in California. He was awarded the 2010 J.J. Sakurai Prize for Theoretical Particle Physics (with Gerry Guralnik, C.R. Hagen, Tom Kibble, Peter Higgs and Robert Brout), the Wolf Prize in Physics in 2004 (with Brout and Higgs) and the High Energy and Particle Prize of the European Physical Society (with Brout and Higgs) in 1997 for the mechanism which unifies short and long range interactions by generating massive gauge vector bosons. He has made contributions in statistical physics, quantum field theory, cosmology, string theory and supergravity. He is the recipient of the 2013 Prince of Asturias Award in technical and scientific research, together with Peter Higgs and CERN.

Peter W. Higgs CH, FRS, FRSE was born in 1929 and is a British theoretical physicist, Nobel laureate and emeritus professor at the University of Edinburgh. He is best known for his 1960s proposal of broken symmetry in electroweak theory, explaining the origin of mass of elementary particles in general and of the W and Z bosons in particular. This so-called Higgs mechanism, which was proposed by several physicists besides Higgs at about the same time, predicts the existence of a new particle, the Higgs boson, which was often described as "the most sought-after particle in modern physics". CERN announced on 4 July 2012 that it had experimentally established the existence of a Higgs-like boson, but further work is needed to analyse its properties and see if they match those expected of the Standard Model Higgs boson. On 14 March 2013, the newly discovered particle was tentatively confirmed to have positive parity and zero spin, two fundamental criteria of a Higgs boson, making it the first known fundamental scalar particle to be discovered in nature (although composite spin-zero particles such as the kaon had been observed more than half a century earlier). The Higgs mechanism is generally accepted as an important ingredient in the Standard Model of particle physics, without which certain particles would have no mass.

Nobel Prize in Chemistry for 2013

The Nobel Prize in Chemistry 2013 was awarded jointly to Martin Karplus (left), Michael Levitt (middle) and Arieh Warshel (right) "for the development of multiscale models for complex chemical systems".

Click Images To Enlarge

Chemists used to create models of molecules using plastic balls and sticks. Today, the modelling is carried out in computers. In the 1970s, Martin Karplus, Michael Levitt and Arieh Warshel laid the foundation for the powerful programs that are used to understand and predict chemical processes. Computer models mirroring real life have become crucial for most advances made in chemistry today.

Chemical reactions occur at lightning speed. In a fraction of a millisecond, electrons jump from one atomic nucleus to another. Classical chemistry has a hard time keeping up; it is virtually impossible to experimentally map every little step in a chemical process. Aided by the methods now awarded with the Nobel Prize in Chemistry, scientists let computers unveil chemical processes, such as a catalyst's purification of exhaust fumes or the photosynthesis in green leaves.

The work of Karplus, Levitt and Warshel is ground-breaking in that they managed to make Newton's classical physics work side-by-side with the fundamentally different quantum physics. Previously, chemists had to choose one or the other. The strength of classical physics was that calculations were simple and could be used to model really large molecules. Its weakness was that it offered no way to simulate chemical reactions. For that purpose, chemists instead had to use quantum physics. But such calculations required enormous computing power and could therefore only be carried out for small molecules.

This year’s Nobel Laureates in chemistry took the best from both worlds and devised methods that use both classical and quantum physics. For instance, in simulations of how a drug couples to its target protein in the body, the computer performs quantum theoretical calculations on those atoms in the target protein that interact with the drug. The rest of the large protein is simulated using less demanding classical physics.
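
The division of labour described above can be sketched in a few lines of code. The Python snippet below is only a conceptual illustration of an additive QM/MM-style energy partition: the "quantum" and "classical" energy functions are toy placeholders (a harmonic term and a Lennard-Jones term), not real electronic-structure or force-field calculations, and the atom coordinates and region choices are invented for the example.

```python
import numpy as np

# Toy stand-ins for the two levels of theory. In a real multiscale code the
# "quantum" energy would come from an electronic-structure package and the
# "classical" energy from a molecular-mechanics force field.
def toy_quantum_energy(coords):
    # Placeholder: harmonic attraction of each QM atom to the region's centroid.
    centroid = coords.mean(axis=0)
    return 0.5 * np.sum((coords - centroid) ** 2)

def toy_classical_energy(coords):
    # Placeholder: pairwise Lennard-Jones interaction between MM atoms.
    energy = 0.0
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(coords[i] - coords[j])
            energy += 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)
    return energy

def toy_coupling_energy(qm_coords, mm_coords):
    # Placeholder: simple 1/r electrostatic-style coupling between the regions.
    energy = 0.0
    for q in qm_coords:
        for m in mm_coords:
            energy += 1.0 / np.linalg.norm(q - m)
    return energy

def qmmm_energy(coords, qm_indices):
    """Additive QM/MM-style scheme: expensive treatment only for the small QM
    region, cheap classical treatment for everything else, plus a coupling term."""
    qm = coords[qm_indices]
    mm = np.delete(coords, qm_indices, axis=0)
    return (toy_quantum_energy(qm)
            + toy_classical_energy(mm)
            + toy_coupling_energy(qm, mm))

# Example: 20 random "atoms", with atoms 0-4 treated quantum mechanically
# (say, the part of a protein that touches a drug molecule).
rng = np.random.default_rng(0)
coords = rng.uniform(0.0, 10.0, size=(20, 3))
print(qmmm_energy(coords, qm_indices=[0, 1, 2, 3, 4]))
```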

Today the computer is just as important a tool for chemists as the test tube. Simulations are so realistic that they predict the outcome of traditional experiments.

Martin Karplus was born in 1930 and is an Austrian-born American theoretical chemist. He is the Theodore William Richards Professor of Chemistry, emeritus at Harvard University. He is also Director of the Biophysical Chemistry Laboratory, a joint laboratory between the French National Center for Scientific Research and the University of Strasbourg, France. Karplus received the 2013 Nobel Prize in Chemistry, together with Michael Levitt and Arieh Warshel, for "the development of multiscale models for complex chemical systems".

Michael Levitt, FRS was born in 1947 and is an American-British-Israeli biophysicist and a professor of structural biology at Stanford University, a position he has held since 1987. His research is in computational biology and he is a member of the National Academy of Sciences. Levitt received the 2013 Nobel Prize in Chemistry, together with Martin Karplus and Arieh Warshel, for "the development of multiscale models for complex chemical systems".

Arieh Warshel (Hebrew: אריה ורשל) was born in 1940 and is an Israeli-American Distinguished Professor of Chemistry and Biochemistry at the University of Southern California. He received the 2013 Nobel Prize in Chemistry, together with Michael Levitt and Martin Karplus, for "the development of multiscale models for complex chemical systems".

Nobel Prize in Medicine for 2013

The Nobel Prize in Physiology or Medicine 2013 was awarded jointly to James E. Rothman (left), Randy W. Schekman (middle) and Thomas C. Südhof (right) "for their discoveries of machinery regulating vesicle traffic, a major transport system in our cells".

Click Images To Enlarge

The 2013 Nobel Prize was awarded jointly to three scientists who have solved the mystery of how the cell organizes its transport system. Each cell is a factory that produces and exports molecules. For instance, insulin is manufactured and released into the blood and signaling molecules called neurotransmitters are sent from one nerve cell to another. These molecules are transported around the cell in small packages called vesicles. The three Nobel Laureates have discovered the molecular principles that govern how this cargo is delivered to the right place at the right time in the cell.

Randy Schekman discovered a set of genes that were required for vesicle traffic. James Rothman unravelled protein machinery that allows vesicles to fuse with their targets to permit transfer of cargo. Thomas Südhof revealed how signals instruct vesicles to release their cargo with precision.

Through their discoveries, Rothman, Schekman and Südhof have revealed the exquisitely precise control system for the transport and delivery of cellular cargo. Disturbances in this system have deleterious effects and contribute to conditions such as neurological diseases, diabetes, and immunological disorders.

How cargo is transported in the cell

In a large and busy port, systems are required to ensure that the correct cargo is shipped to the correct destination at the right time. The cell, with its different compartments called organelles, faces a similar problem: cells produce molecules such as hormones, neurotransmitters, cytokines and enzymes that have to be delivered to other places inside the cell, or exported out of the cell, at exactly the right moment. Timing and location are everything. Miniature bubble-like vesicles, surrounded by membranes, shuttle the cargo between organelles or fuse with the outer membrane of the cell and release their cargo to the outside. This is of major importance, as it triggers nerve activation in the case of transmitter substances, or controls metabolism in the case of hormones. How do these vesicles know where and when to deliver their cargo?

Traffic congestion reveals genetic controllers

Randy Schekman was fascinated by how the cell organizes its transport system and in the 1970s decided to study its genetic basis by using yeast as a model system. In a genetic screen, he identified yeast cells with defective transport machinery, giving rise to a situation resembling a poorly planned public transport system. Vesicles piled up in certain parts of the cell. He found that the cause of this congestion was genetic and went on to identify the mutated genes. Schekman identified three classes of genes that control different facets of the cell's transport system, thereby providing new insights into the tightly regulated machinery that mediates vesicle transport in the cell.

Docking with precision

James Rothman was also intrigued by the nature of the cell's transport system. When studying vesicle transport in mammalian cells in the 1980s and 1990s, Rothman discovered that a protein complex enables vesicles to dock and fuse with their target membranes. In the fusion process, proteins on the vesicles and target membranes bind to each other like the two sides of a zipper. The fact that there are many such proteins and that they bind only in specific combinations ensures that cargo is delivered to a precise location. The same principle operates inside the cell and when a vesicle binds to the cell's outer membrane to release its contents.

It turned out that some of the genes Schekman had discovered in yeast coded for proteins corresponding to those Rothman identified in mammals, revealing an ancient evolutionary origin of the transport system. Collectively, they mapped critical components of the cell's transport machinery.

Timing is everything

Thomas Südhof was interested in how nerve cells communicate with one another in the brain. The signalling molecules, neurotransmitters, are released from vesicles that fuse with the outer membrane of nerve cells by using the machinery discovered by Rothman and Schekman. But these vesicles are only allowed to release their contents when the nerve cell signals to its neighbours. How is this release controlled in such a precise manner? Calcium ions were known to be involved in this process and in the 1990s, Südhof searched for calcium-sensitive proteins in nerve cells. He identified molecular machinery that responds to an influx of calcium ions and directs neighbouring proteins rapidly to bind vesicles to the outer membrane of the nerve cell. The zipper opens up and signal substances are released. Südhof's discovery explained how temporal precision is achieved and how vesicles' contents can be released on command.

Vesicle transport gives insight into disease processes

The three Nobel Laureates have discovered a fundamental process in cell physiology. These discoveries have had a major impact on our understanding of how cargo is delivered with timing and precision within and outside the cell. Vesicle transport and fusion operate, with the same general principles, in organisms as different as yeast and man. The system is critical for a variety of physiological processes in which vesicle fusion must be controlled, ranging from signalling in the brain to release of hormones and immune cytokines. Defective vesicle transport occurs in a variety of diseases including a number of neurological and immunological disorders, as well as in diabetes. Without this wonderfully precise organization, the cell would lapse into chaos.

James E. Rothman was born 1950 in Haverhill, Massachusetts, USA. He received his PhD from Harvard Medical School in 1976, was a postdoctoral fellow at Massachusetts Institute of Technology, and moved in 1978 to Stanford University in California, where he started his research on the vesicles of the cell. Rothman has also worked at Princeton University, Memorial Sloan-Kettering Cancer Institute and Columbia University. In 2008, he joined the faculty of Yale University in New Haven, Connecticut, USA, where he is currently Professor and Chairman in the Department of Cell Biology.

Randy W. Schekman was born 1948 in St Paul, Minnesota, USA, studied at the University of California in Los Angeles and at Stanford University, where he obtained his PhD in 1974 under the supervision of Arthur Kornberg (Nobel Prize 1959) and in the same department that Rothman joined a few years later. In 1976, Schekman joined the faculty of the University of California at Berkeley, where he is currently Professor in the Department of Molecular and Cell biology. Schekman is also an investigator of Howard Hughes Medical Institute.

Thomas C. Südhof was born in 1955 in Göttingen, Germany. He studied at the Georg-August-Universität in Göttingen, where he received an MD in 1982 and a Doctorate in neurochemistry the same year. In 1983, he moved to the University of Texas Southwestern Medical Center in Dallas, Texas, USA, as a postdoctoral fellow with Michael Brown and Joseph Goldstein (who shared the 1985 Nobel Prize in Physiology or Medicine). Südhof became an investigator of Howard Hughes Medical Institute in 1991 and was appointed Professor of Molecular and Cellular Physiology at Stanford University in 2008.

Nobel Prize in Literature for 2013

The Nobel Prize in Literature 2013 was awarded to Alice Munro, "master of the contemporary short story".

Click Image To Enlarge

Alice Ann Munro (née Laidlaw) was born in 1931 and is a Canadian author writing in English. Munro's work has been described as having revolutionized the architecture of short stories, especially in its tendency to move forward and backward in time. Munro's fiction is most often set in her native Huron County in southwestern Ontario. Her stories explore human complexities in an uncomplicated prose style. Munro's writing has established her as "one of our greatest contemporary writers of fiction," or, as Cynthia Ozick put it, "our Chekhov." Alice Munro was awarded the 2013 Nobel Prize in Literature for her work as "master of the contemporary short story" and the 2009 Man Booker International Prize for her lifetime body of work; she is also a three-time winner of Canada's Governor General's Award for fiction.

Nobel Prize in Economics for 2013

There is no way to predict the price of stocks and bonds over the next few days or weeks. But it is quite possible to foresee the broad course of these prices over longer periods, such as the next three to five years. These findings, which might seem both surprising and contradictory, were made and analyzed by this year's Laureates, Eugene Fama, Lars Peter Hansen and Robert Shiller.

Beginning in the 1960s, Eugene Fama and several collaborators demonstrated that stock prices are extremely difficult to predict in the short run, and that new information is very quickly incorporated into prices. These findings not only had a profound impact on subsequent research but also changed market practice. The emergence of so-called index funds in stock markets all over the world is a prominent example.

If prices are nearly impossible to predict over days or weeks, then shouldn’t they be even harder to predict over several years? The answer is no, as Robert Shiller discovered in the early 1980s. He found that stock prices fluctuate much more than corporate dividends, and that the ratio of prices to dividends tends to fall when it is high, and to increase when it is low. This pattern holds not only for stocks, but also for bonds and other assets.
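
A rough way to see Shiller's point is to regress cumulative returns over longer and longer horizons on the dividend-price ratio and watch the explanatory power grow. The Python sketch below does this on simulated data; the persistence, coefficients and noise levels are invented solely to illustrate the mechanism and are not estimates from any real market series.

```python
import numpy as np

rng = np.random.default_rng(42)
T = 2000  # months of simulated data

# Simulate a slowly mean-reverting log dividend-price ratio (AR(1))...
dp = np.zeros(T)
for t in range(1, T):
    dp[t] = 0.99 * dp[t - 1] + rng.normal(0.0, 0.05)

# ...and monthly returns that load weakly on the lagged ratio, plus noise.
returns = 0.004 + 0.02 * dp[:-1] + rng.normal(0.0, 0.04, T - 1)

def horizon_r2(returns, predictor, horizon):
    """R^2 of a regression of `horizon`-period cumulative returns on the predictor."""
    n = len(returns) - horizon + 1
    y = np.array([returns[i:i + horizon].sum() for i in range(n)])
    x = predictor[:n]
    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

# Short-horizon returns look almost unpredictable; long-horizon R^2 climbs.
for months in (1, 12, 36, 60):
    print(f"{months:3d}-month horizon: R^2 = {horizon_r2(returns, dp[:-1], months):.3f}")
```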

One approach interprets these findings in terms of the response by rational investors to uncertainty in prices. High future returns are then viewed as compensation for holding risky assets during unusually risky times. Lars Peter Hansen developed a statistical method that is particularly well suited to testing rational theories of asset pricing. Using this method, Hansen and other researchers have found that modifications of these theories go a long way toward explaining asset prices.
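
Hansen's generalized method of moments turns a rational asset-pricing theory into moment conditions that the data should satisfy: in a simple consumption-based model, for example, E[beta * (c_{t+1}/c_t)^(-gamma) * R_{t+1} - 1] = 0. The sketch below estimates beta and gamma from simulated data by driving a small set of sample moments toward zero. It is a bare-bones illustration with an identity weighting matrix and invented numbers, not Hansen's full procedure (no optimal weighting, no standard errors, no specification test).

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
T = 5000

# Simulated data: persistent consumption growth and an asset return built so
# that beta = 0.98, gamma = 2 price it roughly correctly by construction.
true_beta, true_gamma = 0.98, 2.0
log_growth = np.zeros(T)
for t in range(1, T):
    log_growth[t] = 0.01 + 0.5 * log_growth[t - 1] + rng.normal(0.0, 0.02)
cons_growth = np.exp(log_growth)                          # c_{t+1} / c_t
sdf = true_beta * cons_growth ** (-true_gamma)            # stochastic discount factor
returns = (1.0 / sdf) * np.exp(rng.normal(0.0, 0.01, T))  # so E[sdf * R] is about 1

def moments(params):
    beta, gamma = params
    # Pricing error implied by the model: beta * (c_{t+1}/c_t)^(-gamma) * R_{t+1} - 1
    m = beta * cons_growth ** (-gamma) * returns - 1.0
    # Two moment conditions: the unconditional error, and the error interacted
    # with lagged consumption growth used as an instrument.
    return np.array([m.mean(), (m[1:] * cons_growth[:-1]).mean()])

def gmm_objective(params):
    g = moments(params)
    return g @ g  # identity weighting matrix, for simplicity

result = minimize(gmm_objective, x0=[0.9, 1.0], method="Nelder-Mead")
print("estimated (beta, gamma):", result.x)  # with this simulated sample, close to (0.98, 2.0)
```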

Another approach focuses on departures from rational investor behavior. So-called behavioral finance takes into account institutional restrictions, such as borrowing limits, which prevent smart investors from trading against any mispricing in the market.

The Laureates have laid the foundation for the current understanding of asset prices. It relies in part on fluctuations in risk and risk attitudes, and in part on behavioral biases and market frictions.

Eugene Francis "Gene" Fama (/ˈfɑːmə/) was born in 1939 and is an American economist and Nobel laureate in Economics, known for his work on portfolio theory and asset pricing, both theoretical and empirical.

He is currently Robert R. McCormick Distinguished Service Professor of Finance at the University of Chicago Booth School of Business. In 2013 it was announced that he would be awarded the Nobel Prize in Economic Sciences jointly with Robert Shiller and Lars Peter Hansen.

Lars Peter Hansen was born in 1952 and is the David Rockefeller Distinguished Service Professor of Economics at the University of Chicago. Best known for his work on the Generalized Method of Moments, he is also a distinguished macroeconomist, focusing on the linkages between the financial and real sectors of the economy. In 2013, it was announced that he would be awarded the Nobel Memorial Prize in Economics, jointly with Robert J. Shiller and Eugene Fama.

Robert James "Bob" Shiller was born in 1946 and is an American economist, academic, and best-selling author. He currently serves as a Sterling Professor of Economics at Yale University and is a fellow at the Yale School of Management's International Center for Finance. Shiller has been a research associate of the National Bureau of Economic Research (NBER) since 1980, was Vice President of the American Economic Association in 2005, and President of the Eastern Economic Association for 2006-2007. He is also the co‑founder and chief economist of the investment management firm MacroMarkets LLC. Shiller is ranked among the 100 most influential economists of the world. On 14 October 2013, it was announced that Shiller, together with Eugene Fama and Lars Peter Hansen, would receive the 2013 Nobel Prize in Economics, “for their empirical analysis of asset prices”.

Nobel Peace Prize for 2013

The Nobel Peace Prize 2013 was awarded to the Organisation for the Prohibition of Chemical Weapons "for its extensive efforts to eliminate chemical weapons".

The Norwegian Nobel Committee has decided that the Nobel Peace Prize for 2013 is to be awarded to the Organisation for the Prohibition of Chemical Weapons (OPCW) for its extensive efforts to eliminate chemical weapons.

During World War One, chemical weapons were used to a considerable degree. The Geneva Convention of 1925 prohibited the use, but not the production or storage, of chemical weapons. During World War Two, chemical means were employed in Hitler’s mass exterminations. Chemical weapons have subsequently been put to use on numerous occasions by both states and terrorists. In 1992-93 a convention was drawn up prohibiting also the production and storage of such weapons. It came into force in 1997. Since then the OPCW has, through inspections, destruction and by other means, sought the implementation of the convention. 189 states have acceded to the convention to date.

The conventions and the work of the OPCW have defined the use of chemical weapons as a taboo under international law. Recent events in Syria, where chemical weapons have again been put to use, have underlined the need to enhance the efforts to do away with such weapons. Some states are still not members of the OPCW. Certain states have not observed the deadline, which was April 2012, for destroying their chemical weapons. This applies especially to the USA and Russia.

Disarmament figures prominently in Alfred Nobel’s will. The Norwegian Nobel Committee has through numerous prizes underlined the need to do away with nuclear weapons. By means of the present award to the OPCW, the Committee is seeking to contribute to the elimination of chemical weapons.

COMMENTARY: Congratulations to all recipients. The 2013 Nobel laureates include six Americans. Here's a YouTube video of the Nobel Prize Ceremony:

"Mankind is supposedly the most highly developed species on the planet, yet is surprisingly unsuited and ill-equipped for Earth's environment: harmed by sunlight, a strong dislike for naturally occurring foods, ridiculously high rates of chronic disease, and more."

Dr Ellis says that humans might suffer from bad backs because they evolved on a world with lower gravity.

He also says that it is strange that babies’ heads are so large and make it difficult for women to give birth, which can result in fatalities of the mother and infant.

Dr Ellis says that humans might suffer from bad backs (illustrated) because they evolved on a world with lower gravity. He also says that it is strange that babies' heads are so large and make it difficult for women to give birth, which resulted in fatalities in earlier times. (Click Image To Enlarge)

No other native species on this planet has this problem, he says.

He also believes humans are not designed to be as exposed to the sun as they are on Earth, as they cannot sunbathe for more than a week or two – unlike a lizard – and cannot be exposed to the sun every day without problems.

Dr Ellis also claims humans are always ill, and that this might be because our body clocks have evolved to expect a 25-hour day, as sleep researchers have found.

He says:

"This is not a modern condition; the same factors can be traced all the way back through mankind's history on Earth."

He suggests that early human species such as Homo erectus were crossbred with another species, perhaps from Alpha Centauri, which is the closest star system to our solar system, some 4.37 light years away from the sun.

He also believes humans are not designed to be so exposed to the sun as they are on Earth, as they cannot sunbathe for more than a week or two, unlike a lizard, and cannot be exposed to the sun every day. (Click Image To Enlarge)

Dr Ellis said many people feel that they don't belong and don't feel at home on Earth. He said:

"This suggests (to me at least) that mankind may have evolved on a different planet, and we may have been brought here as a highly developed species. One reason for this … is that the Earth might be a prison planet, since we seem to be a naturally violent species and we're here until we learn to behave ourselves."

Dr Ellis said the book is intended to spark debate rather than serve as a scientific study, and he hopes it will prompt people to get in touch with him with further suggestions of 'evidence'.

While other scientists have said some bacteria arrived on Earth from space, Chris McKay, an astrobiologist at NASA, said that to jump to the conclusion that it is alien life is "a big jump".

Was this home? Dr Ellis suggests early human species such as Homo erectus were crossbred with another species, perhaps from Alpha Centauri, the closest star system to our own at some 4.37 light years from the sun. The star Proxima Centauri, part of that system, is pictured. (Click Image To Enlarge)

Professor Wainwright from the University of Sheffield plans to investigate further; he believes that life which did not originate on Earth is constantly arriving from space.

Dr Ellis says that while his idea is an extreme evolution of that idea, it is intended to be thought-provoking and he claims to have had a largely positive response to it.

He is interested in whether humans came to Earth separately, perhaps by arriving on meteors and comets, before evolving into the species we know today.

He says:

"My thesis proposes that mankind did not evolve from that particular strain of life, but evolved elsewhere and was transported to Earth (as fully evolved Homo sapiens) between 60,000 and 200,000 years ago."

COMMENTARY: For some time now, I have come to the conclusion that modern man (a.k.a. Homo sapiens) was engineered from the DNA of prehistoric Cro-Magnon Man, the species of humans that preceded Homo sapiens.

I also believe that Earth has been visited by intelligent extra-terrestrial beings, many thousands of years ago, and that they have been using our planet as a laboratory for their DNA experiments on both human and animal species. The Earth has, in effect, become a laboratory for DNA experimentation on a planetary scale.

These alien DNA experiments may account for the physical differences between humans on different continents. Each continental species was engineered so that it could cope with its local environment. This may help explain why humans from Africa and from Latin and South America are generally darker skinned and have darker hair, owing to the hotter climates and abundance of sun in these regions. Europeans and North Americans, on the other hand, tend to be lighter skinned and have lighter hair because they receive less sun and their climates alternate between cold and warm periods throughout the year.

Personally, I believe that humans experience back problems and other skeletal and neurological ailments because of poor posture, lack of exercise, and improper nutrition and diet. African humans tend to be known for their physical prowess and ability to excel in physical sports, and they tend to hunt for all of their food. Humans in more advanced cultures, on the other hand, tend to overwork, do not exercise enough, sit behind a desk or become couch potatoes who watch too much TV, and eat diets full of processed foods with too much fat, sugar and salt. This makes us fat and puts more weight on our skeletal structure, especially the spine, and the result is a series of back ailments.

Another story that I keep reading about is that extra-terrestrials are using our DNA to bolster their own dying species. These aliens have created human-alien hybrids, with horrific stories of underground labs where these experiments are conducted.

Wherever the answer may lie, human beings are genetically different from extra-terrestrial beings. It is thought that aliens have taken an interest in humans because, unlike aliens, we have a soul and are able to exist on multi-dimensional levels, though we have yet to master this ability.

On the other hand, as Dr. Silver suggests in his book, perhaps we are the "castoffs" or "rejects" from other galaxies - the "defective" human beings who can't seem to control their emotions, who are full of evil and greed, with war-like tendencies, and this makes us too dangerous to interact with at this time. If I were an alien studying our species, I would wonder long and hard why we keep hating and killing each other. The 20th century saw five major wars and 100 minor wars between countries, and hundreds of millions of innocent human beings were killed by fanatics.

I know all of this sounds like mumbo-jumbo, or the ramblings of a deranged mind, but I just love to theorize about where we came from, why the aliens are here, and why they are unable to stop us from killing ourselves. Perhaps they prefer to have us kill ourselves rather than having to do it themselves. After all, they are technologically superior in every possible way, and I am sure they have the means to destroy our planet and every living thing on it. But maybe, just maybe, these aliens have evolved beyond hate, greed, and war, and have embraced peace and love - something we have yet to master.

Courtesy of an article dated November 14, 2013 appearing in The Daily Mail and an article dated September 30, 2013 appearing in Yahoo News