Archive for September 28th, 2011

There seems to be a good deal of activity in quantum computing implementations using qubits based on spin states of one or two electrons. Alternative implementations involve photons, ions, or several other possibilities. So far, spin qubit implementations have lagged somewhat, even though spin qubits have a notable advantage: they can be implemented with solid-state semiconductor technology, which of course is now very sophisticated.

New research now seems to be advancing spin qubit technology. Among the earlier challenges were reading out single qubit states and addressing individual qubits within a group. Solutions to these issues have made single- and double-qubit operations possible. So the next stage is being able to deal with more than two spin qubits at a time in a solid-state system.

Over the last decade, the experimental emphasis for spin qubits has been on demonstrating the required criteria for a viable scheme for quantum computing. This has been driven primarily by groups at Harvard and Delft universities. The next stage is to go to higher numbers of coupled qubits and demonstrate more complex quantum gate operations and algorithms. The paper by Brunner et al. is a necessary step forward. It is clear that the spin qubit system currently lags behind other quantum computer implementation schemes. Solid-state based schemes, especially semiconductor ones, have always held the promise, however, that the enormous progress from decades of device integration technology development could one day lead to scalability not feasible with other schemes. To achieve this, however, we need parallel work on spin qubits in different materials to optimize coherence times, device designs, and architectures and to explore hybrid technology based on exploiting the most useful properties of different schemes.

News of new extrasolar planet discoveries keeps coming, fast and furious. It’s hard to keep track of the latest. There are some very strange planets out there. But what really interests most people, it seems, is how many planets out there are like Earth. Apparently there’s this deep-seated desire to find places in the universe that seem familiar.

Unfortunately, that’s not easy. In the first place, Earth is a relatively small planet, as are most rocky planets, in contrast to larger “gas giant” planets. It is these small, rocky planets that should be most like Earth, especially in terms of allowing for “life as we know it”. Although there are several different methods of detecting extrasolar planets, they’re all more likely to notice larger planets than smaller ones.

Secondly, only planets that orbit in a narrow range of distances from their host star are neither too hot nor too cold to permit the existence of liquid water and a decent atmosphere on the planet’s surface. The size of the range depends on the size and (relatedly) the brightness of the star. For stars that are reasonably close in size to our own, it’s easier to detect any kind of planet close to the star than farther out, and those close-in orbits are usually too close to fall within the habitable zone.

For these two reasons alone, present extrasolar planet searches are likely to seriously undercount Earthlike planets in a star’s habitable zone. However, by analyzing the data already accumulated on extrasolar planets around stars similar in size to the Sun, and making a reasonable assumption, the data can be extrapolated to suggest roughly how many planets similar to Earth are out there, even though we can’t yet detect most of them.

The assumption is that the pattern of existence and location for smaller planets is similar to the pattern for larger planets we can detect. With that assumption, the conclusion stated in the title here is at least plausible.

What interests most astronomers is how many exoplanets orbit at a greater distance, inside the habitable zone. Most of these planets are too far away from their stars to have been picked up by Kepler yet. But Traub says his data analysis provides a way to work out how many there ought to be.

That’s because he’s found a power law that describes how the number of planets varies with orbital period. So all he has to do is plug in the longer orbital periods that correspond to the habitable zone to work out how many planets there ought to be at that distance.

Here’s the answer: “About one-third of FGK stars are predicted to have at least one terrestrial, habitable-zone planet,” he says.
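As an illustration of this kind of extrapolation (a sketch of the general idea, not Traub’s actual fit), one can assume the number of planets per star follows a power law in orbital period and integrate it over a longer-period range. The exponent, normalization, and period ranges below are invented placeholders:

```python
import numpy as np

# Rough sketch of the extrapolation idea, NOT Traub's actual analysis.
# Assume the number of planets per star per unit log10(orbital period P)
# follows a power law:  dN/dlogP = C * P**ALPHA.
# The exponent and normalization here are invented placeholders.
ALPHA = 0.3   # assumed power-law slope (placeholder)
C = 0.05      # assumed normalization, planets per star per dex (placeholder)

def expected_planets(p_min_days, p_max_days, n=1000):
    """Integrate the assumed power law over a range of orbital periods."""
    logp = np.linspace(np.log10(p_min_days), np.log10(p_max_days), n)
    dn = C * (10.0 ** logp) ** ALPHA   # planets per star per dex
    # simple trapezoidal integration over log-period
    return float(np.sum((dn[1:] + dn[:-1]) / 2 * np.diff(logp)))

# Short periods, where transit surveys are most sensitive:
n_close = expected_planets(1, 42)
# Longer periods, roughly corresponding to a Sun-like star's habitable zone:
n_hz = expected_planets(200, 700)
print(n_close, n_hz)
```

The point is just that once the power law is pinned down by the detected close-in planets, the same formula yields an expected count at habitable-zone periods, where detections are still sparse.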

It appears that a certain class of drugs (beta-blockers) used to control high blood pressure may also be helpful for people with some cancers, at least by slowing the progression of the disease.

How does this happen? It seems that earlier studies had shown that a couple of stress hormones – epinephrine and norepinephrine – bind to certain tumor cell receptors. When that happens, the cell is stimulated to produce vascular endothelial growth factor and two immune system interleukins. The result is an enhancement of blood supply to the tumor, thus promoting growth and metastasis.

But beta-blockers block the receptors, and hence inhibit the effects of the hormones. Theoretically this should inhibit tumor development. To test this hypothesis, a large database of Danish cancer patient records was examined. It was found that melanoma patients who were also taking beta-blockers had their chances of survival over the period studied improved by 13%. Not a lot, but a benefit nevertheless. And the value of reducing effects from stress hormones was demonstrated. Perhaps other drugs may have larger effects. Lowering stress levels may help too.

Beta-blocker drugs, commonly used to treat high blood pressure, may also play a major role in slowing the progression of certain serious cancers, according to a new study.

A review of thousands of medical records in the Danish Cancer Registry showed that patients with the skin cancer melanoma, and who also were taking a specific beta-blocker, had much lower mortality rates than did patients not taking the drug.

Just a few days ago there was a story about a new type of “standard candle” for use in measuring very large astronomical distances. Such standard candles have been lacking when the distances to be measured are a bit more than halfway back to the big bang – around 7 or 8 billion light-years. Consequently, it’s very difficult to obtain reasonably precise data about many things that happened in the early universe, such as the rate at which the universe was expanding then. This is a major problem since phenomena such as dark energy are difficult to theorize about without good data.

Now another type of standard candle has been identified, and it also makes it possible to gauge large distances – even to objects whose light was emitted about 1.5 billion years after the big bang, at a distance of 12 billion light-years (redshift z~4).

In theory, distance should be simple to work out. If you know the intrinsic brightness of an object, a simple measure of its apparent brightness will tell you how far away it is, since apparent brightness falls as the inverse square of distance.
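The inverse-square relation can be turned around to infer distance; here is a minimal sketch, with hypothetical luminosity and flux values:

```python
import math

def distance_from_flux(luminosity, flux):
    """Observed flux falls as the inverse square of distance:
         flux = luminosity / (4 * pi * distance**2)
    Solving for distance gives the expression below."""
    return math.sqrt(luminosity / (4 * math.pi * flux))

# Sanity check: quartering the observed flux should double the inferred
# distance (the values here are arbitrary, in consistent units).
L = 1.0
d_bright = distance_from_flux(L, 0.04)
d_faint = distance_from_flux(L, 0.01)
print(d_faint / d_bright)
```

This is why a “standard candle” is so valuable: it supplies the luminosity term, and the rest is arithmetic.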

So in astronomy, the problem of distance is intimately linked to the problem of knowing an object’s intrinsic brightness.

But that’s hard. There’s simply no way to tell the intrinsic brightness of most stars and galaxies and so no way to work out their distance.