This is a repository for cool scientific discussion and fascination: scientific facts, theories, and any other cool scientific stuff you'd like to share with others. The kind of stuff that makes you smile and wonder at the amazing shit going on around us that most people don't notice.

The Universe is a wee bit older than we thought. Not only that, but turns out the ingredients are a little bit different, too. And not only that, but the way they’re mixed isn’t quite what we expected, either. And not only that, but there are hints and whispers of something much grander going on as well.
So what’s going on?

The European Space Agency’s Planck mission is what’s going on. Planck has been scanning the entire sky, over and over, peering at the radio and microwaves pouring out of the Universe. Some of this light comes from stars, some from cold clumps of dust, some from exploding stars and galaxies. But a portion of it comes from farther away…much farther away. Billions of light years, in fact, all the way from the edge of the observable Universe.

This light was first emitted when the Universe was very young, about 380,000 years old. It was blindingly bright, but in its eons-long travel to us has dimmed and reddened. Fighting the expansion of the Universe itself, the light has had its wavelength stretched out until it gets to us in the form of microwaves. Planck gathered that light for over 15 months, using instruments far more sensitive than ever before.

The light from the early Universe shows it’s not smooth. If you crank the contrast way up you see slightly brighter and slightly dimmer spots. These correspond to changes in the temperature of the Universe on a scale of 1 part in 100,000. That’s incredibly small, but has profound implications. We think those fluctuations were imprinted on the Universe when it was only a trillionth of a trillionth of a second old, and they grew with the Universe as it expanded. They were also the seeds of the galaxies and clusters of galaxies we see today.
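That "1 part in 100,000" contrast is easy to put in physical units. Assuming today's mean CMB temperature of about 2.725 K (a standard cosmology value, not given in the text), the fluctuations work out to a few tens of millionths of a degree:

```python
# Size of the CMB temperature fluctuations implied by the
# "1 part in 100,000" figure. The mean temperature of 2.725 K
# is the standard present-day CMB value (an assumption not
# stated in the text).
T_CMB = 2.725        # kelvin
contrast = 1e-5      # 1 part in 100,000
delta_T = T_CMB * contrast
print(f"fluctuations ~ {delta_T * 1e6:.2f} microkelvin")
```

So the hot and cold spots Planck mapped differ from the average by only about 27 millionths of a kelvin.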

What started out as quantum fluctuations when the Universe was smaller than a proton have now grown to be the largest structures in the cosmos, hundreds of millions of light years across. Let that settle in your brain a moment.

And those fluctuations are the key to Planck’s observations. By looking at those small changes in light we can find out a lot about the Universe. Scientists spent years analyzing the Planck data, and what they found is pretty amazing.

Gene therapy has rid three adult patients of acute leukemia. The patients have been cancer-free for 5 months to 2 years, according to a study published last week (March 20) in Science Translational Medicine. Two other patients received the therapy, but one died for reasons believed to be unrelated, and the second died after relapsing.

“We had hoped, but couldn’t have predicted that the response would be so profound and rapid,” Renier Brentjens, lead author of the paper and an oncologist at Memorial Sloan-Kettering Cancer Center, told The New York Times.

The patients all had B-cell acute lymphoblastic leukemia and had relapsed following chemotherapy. The outlook for patients in this category is typically bleak.

The researchers filtered the patients’ blood for T-cells and engineered them with a virus carrying genetic material that would make them recognize CD19, a protein expressed on the surfaces of B-cells. When put back into the patients, the T-cells were meant to attack the B-cells, whether cancerous or normal. The patients experienced unpleasant and dangerous immune reactions, but four of them, including the one who eventually died of an unrelated blood clot, went into remission.

The four patients who went into remission also received bone marrow transplants following the therapy, although it is unclear whether the transplants contributed to their recovery.

This is the first time T-cell therapy has been used successfully to treat adults with acute lymphoblastic leukemia. Another treatment based on training T cells to attack cancerous cells is being developed at the University of Pennsylvania and is being used to treat childhood leukemia and chronic leukemia in adults.

The Sloan-Kettering approach for treating B-cell acute lymphoblastic leukemia will be tested in a second trial of 50 patients, New Scientist reported. The same idea could also be used to treat other cancers.

“Although it's early days for these trials, the approach of modifying a patient's T-cells to attack their cancer is looking increasingly like one that will, in time, have a place alongside more traditional treatments," Paul Moss, a cancer researcher at the University of Birmingham, told New Scientist.

Q: Since pi is infinite, do its digits contain all finite sequences of numbers?

Mathematician: As it turns out, mathematicians do not yet know whether the digits of pi contain every single finite sequence of numbers. That being said, many mathematicians suspect that this is the case, which would imply not only that the digits of pi contain any number that you can think of, but also that they contain a binary representation of Britney Spears’ DNA, as well as a JPEG-encoded image of you making out with a polar bear. Unfortunately, to this day it has not even been proven that every single digit from 0 to 9 occurs an unlimited number of times in pi’s decimal representation (so, after some point, pi might contain only the digits 0 and 1, for example). On the other hand, since pi is an irrational number, we do know that its digits never terminate and never settle into an infinitely repeating sequence (like 12341234123412341234…).

One thing to note is that when mathematicians study the first trillion or so digits of pi on a computer, they find that the digits appear to be statistically random in the sense that the probability of each digit occurring appears to be independent of what digits came just before it. Furthermore, each digit (0 through 9) appears to occur essentially one tenth of the time, as would be expected if the digits had been generated uniformly at random.
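A small version of this experiment is easy to run yourself. The sketch below computes pi's digits with Python's standard-library `decimal` module using Machin's formula (one classical choice among many; the article doesn't specify a method), then tallies digit frequencies and searches for a chosen sequence:

```python
from collections import Counter
from decimal import Decimal, getcontext

def pi_digits(n):
    """First n decimal digits of pi (including the leading 3),
    computed via Machin's formula: pi = 16*atan(1/5) - 4*atan(1/239)."""
    getcontext().prec = n + 10
    eps = Decimal(10) ** -(n + 5)

    def atan_inv(x):
        # Taylor series for arctan(1/x): 1/x - 1/(3x^3) + 1/(5x^5) - ...
        x = Decimal(x)
        power = 1 / x
        total, k, sign, xsq = power, 1, -1, x * x
        while True:
            power /= xsq
            term = power / (2 * k + 1)
            if term < eps:
                break
            total += sign * term
            sign, k = -sign, k + 1
        return total

    pi = 16 * atan_inv(5) - 4 * atan_inv(239)
    return str(pi).replace(".", "")[:n]

digits = pi_digits(10000)
freq = Counter(digits)                 # each digit should land near 10%
print({d: freq[d] / len(digits) for d in "0123456789"})
print(digits.find("999999"))           # index of a chosen sequence, or -1
```

With only ten thousand digits the frequencies already cluster tightly around one tenth, exactly the behavior the paragraph above describes.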

While tests performed on samples can never unequivocally prove that a sequence is random (in fact, we know the digits of pi are not random, since we know formulas to generate them), the apparent randomness in pi is consistent with the idea that it contains all finite sequences (or, at least, all fairly short ones). In particular, if we generate a number from an infinite stream of digits selected uniformly at random, then with probability 100% that number contains each and every finite sequence of digits, and pi has the appearance of being statistically random.

There is a rather remarkable website that allows you to search the digits of pi for specific integer sequences.

This is an example of a phenomenon known as pareidolia, the human tendency to read significance into random or vague stimuli (both visual and auditory). The term comes from the Greek words "para" (παρά), meaning beside or beyond, and "eidolon" (εἴδωλον), meaning form or image. Though animals or plants can "appear" in clouds and human speech can do the same in static noise, the appearance of a face where there is none is perhaps the most common variant of pareidolia (this includes the subgenre of spotting Jesus or Mary in anything from toast to a crab).

Pareidolia was once thought of as a symptom of psychosis, but is now recognized as a normal human tendency. Carl Sagan theorized that this hypersensitivity to faces stems from an evolutionary need to recognize faces quickly. He wrote in his 1995 book, The Demon-Haunted World, "As soon as the infant can see, it recognizes faces, and we now know that this skill is hardwired in our brains. Those infants who a million years ago were unable to recognize a face smiled back less, were less likely to win the hearts of their parents, and less likely to prosper."

Humans are not alone in their quest to "see" human faces in the sea of visual cues that surrounds them. For decades, scientists have been training computers to do the same. And, like humans, computers display pareidolia.

Though there is something basely human about the tendency to see faces in the non-human shapes around us, to anthropomorphize odd pieces of hardware or rocks on a hillside, that computers see humans where there are none should not be all too surprising. Facial-recognition software is a tough technological feat, and in the process, computers are bound to come up with false positives. Does this make the computers more like us? Have they taken on our most human cognitive errors? In a superficial sense, yes, computers do make errors that are similar to pareidolia, and this seems very human. But as you look into these computer false-positives a bit more, you find a different story.

In an awesome little creative trick, New York University researcher Greg Borenstein applied the open-source software FaceTracker to a Flickr pool of examples called Hello Little Fella. In some instances, FaceTracker found a face just where you or I would:

Like a human, the computer has found a false positive. That humans and computers share some instances of pareidolia seems to underscore the human-like nature of those computers, brought about by their human-led training. In that sense, a computer’s errors make it seem somehow more human.

But maybe the reason a computer "sees" a face in that key is very simple: Things around us do sometimes actually have the shapes that constitute a face. How can we say this is pareidolia, a strange phenomenon that is supposedly the byproduct of millions of years of evolution, and not just the basic truth that sometimes shapes do look like things they are not?

A project from Phil McCarthy called Pareidoloop pushes us to think about these questions. By combining random-polygon-generation software and facial-recognition software, McCarthy’s program builds its own series of randomly generated faces. Out of layers upon layers of mish-mashed shapes, the software "recognizes" the faces, and then fine-tunes them into human likenesses. (McCarthy notes that a lot of them kind of resemble old pictures of Einstein.)
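McCarthy's actual program pairs a polygon renderer with a real face detector, neither of which is reproduced here. As a toy sketch of the same generate-mutate-score loop, the stdlib-only code below hill-climbs a small grayscale grid toward whatever a crude hand-written "face score" likes; the score function, grid size, and feature positions are all invented for illustration:

```python
import random

SIZE = 16
EYES = [(4, 4), (4, 10)]                  # (row, col) centers of two "eyes"
MOUTH = [(11, c) for c in range(5, 11)]   # a horizontal "mouth" strip

def face_score(img):
    """Toy detector: rewards dark pixels in the eye/mouth regions and
    slightly rewards bright pixels elsewhere. A crude stand-in for a
    real face-detection confidence score."""
    feature = set(EYES) | set(MOUTH)
    score = 0.0
    for r in range(SIZE):
        for c in range(SIZE):
            v = img[r][c]  # 0.0 = black, 1.0 = white
            score += (1 - v) if (r, c) in feature else v * 0.05
    return score

def mutate(img, rng):
    """Copy the image and paint one random small rectangle a random shade."""
    new = [row[:] for row in img]
    r0, c0 = rng.randrange(SIZE), rng.randrange(SIZE)
    h, w = rng.randint(1, 4), rng.randint(1, 4)
    shade = rng.random()
    for r in range(r0, min(r0 + h, SIZE)):
        for c in range(c0, min(c0 + w, SIZE)):
            new[r][c] = shade
    return new

def pareidoloop(steps=3000, seed=0):
    """Start from noise; keep only mutations the 'detector' scores higher."""
    rng = random.Random(seed)
    img = [[rng.random() for _ in range(SIZE)] for _ in range(SIZE)]
    best = face_score(img)
    for _ in range(steps):
        cand = mutate(img, rng)
        s = face_score(cand)
        if s > best:
            img, best = cand, s
    return img, best
```

The interesting point survives the simplification: the loop never sees a face, only a score, so what it converges toward reflects the detector's idiosyncrasies rather than anything a human would necessarily call a face.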

The computer is "seeing" faces where there are just random shapes! But wouldn't anyone? The results are clearly faces, so much so that recognizing them as such cannot be labeled pareidolia any more so than recognizing faces in a painting of a face is pareidolia. Where is that line? If it's pareidolia to see a face in the two windows and door of a house, why not in a sketch of two eyes and a nose? Faces are, after all, just a series of well arranged polygons. We'll see them in the world around us because sometimes, inevitably, shapes will be arranged in the formation of two eyes, a nose, and a mouth. How can we identify pareidolia in a way that is distinct from the "accurate" identification of an artistic representation of a face? How can we say pareidolia is a phenomenon of the human mind at all?

Borenstein's work with computers provides a way out of this, answering a most human question by looking at the idiosyncrasies of algorithms. He writes:

Facial recognition techniques give computers their own flavor of pareidolia. In addition to responding to actual human faces, facial recognition systems, just like the human vision system, sometimes produce false positives, latching onto some set of features in the image as matching their model of a face. Rather than the millions of years of evolution that shapes human vision, their pareidolia is based on the details of their algorithms and the vicissitudes of the training data they've been exposed to.

Their pareidolia is different from ours. Different things trigger it.

In Borenstein's sample, FaceTracker found faces in only seven percent of the images, meaning that even though the program did display this human tendency, it did so at a rate much lower than the human judges who created the Flickr pool. That said, we do not know how many false positives the program would spot in the world around us that humans didn't include in the pool, though we get a sense from the "mistakes" the program made, sometimes missing the obvious "face" and spotting another. Such mistakes are useful for seeing just how particularly human pareidolia is in the first place. Here's an example:

The computer's false-positive is, as any human could tell you, wrong -- the wrong wrong answer, selecting B where a human would say A, and the answer is actually D, for none of the above. The mistakes of a computer are so other, so less-than-human, that we can see that pareidolia is not the recognition of just any old assemblage of eyes, nose, and a mouth, but specific ones, ones that must come from within the human observer, that are not inherently available in the shapes as they appear in the world.

And it shows us something more. Although a computer may, like a human, find false positives in the world around it, its sensibility for what makes a set of polygons a face is still, somehow, off. On its surface, a computer’s tendency toward pareidolia, this very human phenomenon, seems human-like. In a strange echo of the tendency to see human faces in random shapes, we see our reflection in a machine’s cognition, a sort of pareidolia of the mind. We look at a computer’s pareidolia and think, We make those very same mistakes!

But, in fact, we don't. The mistakes are different. A computer's flaws are still very machine -- and ours are very human.

In the past few days, the Internet has been filled with commentary on whether the National Science Foundation should have paid for my study on duck genitalia, and 88.7 percent of respondents to a Fox News online poll agreed that studying duck genitalia is wasteful government spending. The commentary supporting and decrying the study continues to grow. As the lead investigator in this research, I would like to weigh in on the controversy and offer some insights into the process of research funding by the NSF.

My research on bird genitalia was originally funded in 2005, during the Bush administration. Thus federal support for this research cannot be connected exclusively to sequestration or the Obama presidency, as many of the conservative websites have claimed.

Since Sen. William Proxmire's Golden Fleece awards in the 1970s and 1980s, basic science projects have periodically been singled out by people with political agendas to highlight how government “wastes” taxpayer money on seemingly foolish research. These arguments misrepresent the distinction between basic and applied science, and the roles each plays. Basic science is not aimed at solving an immediate practical problem; it is an integral part of scientific progress, though individual projects may sound meaningless when taken out of context. Basic science often ends up solving practical problems anyway; it is just not designed for that purpose. Applied science builds upon basic science, so the two are inextricably linked. As an example, Geckskin™ is a new adhesive product with myriad applications developed by my colleagues at the University of Massachusetts. Their work is based on several decades of basic research on gecko locomotion.

Whether the government should fund basic research in times of economic crisis is a valid question that deserves well-informed discourse comparing all governmental expenses. As a scientist, my view is that supporting basic and applied research is essential to keep the United States ahead in the global economy. The government cannot afford not to make that investment. In fact, I argue that research spending should increase dramatically for the United States to continue to lead the world in scientific discovery. Investment in the NSF is just over $20 per year per person, while it takes upward of $2,000 per year per person to fund the military. Basic research has to be funded by the government rather than private investors because there are no immediate profits to be derived from it.

Because the NSF budget is so small, and because we have so many well-qualified scientists in need of funds, competition to obtain grants is fierce, and funding rates at the time this research was funded had fallen well below 10 percent. Congress decides the total amount of money that the NSF gets from the budget, but it does not decide which individual projects are funded, and neither does the president or his administration. Funding decisions are made by panels of scientists who are experts in the field, based on peer review by outsiders, often competitors of the scientists who submitted the proposal. The review panel ranks proposals on their intellectual merit and their broader impacts on society before making a recommendation. This recommendation is then acted upon by program officers and other administrators at the NSF, who are also scientists.

This brings us back to the ducks. Male ducks force copulations on females, and males and females are engaged in a genital arms race with surprising consequences. Male ducks have elaborate corkscrew-shaped penises, the length of which correlates with the degree of forced copulation males impose on female ducks. Females are often unable to escape male coercion, but they have evolved vaginal morphology that makes it difficult for males to inseminate females close to the sites of fertilization and sperm storage. Males have counterclockwise spiraling penises, while females have clockwise spiraling vaginas and blind pockets that prevent full eversion of the male penis.

Our latest study examined how the presence of other males influences genital morphology. My colleagues and I found that it does so to an amazing degree, demonstrating that male competition is a driving force behind these male traits that can be harmful to females. That this grant was funded, after the careful scrutiny of many scientists and NSF administrators, reflects the fact that this research is grounded in solid theory and that the project was viewed as having the potential to move science forward (and it has), as well as to fascinate and engage the public. The research has been reported on positively by hundreds of news sites in recent years, even Fox News. Most of the grant money was spent on salaries, putting money back into the economy.

The commentary and headlines in some of the recent articles reflect outrage that the study was about duck genitals, as if there is something inherently wrong or perverse about this line of research. Imagine if medical research drew the line at the belt! Genitalia, dear readers, are where the rubber meets the road, evolutionarily. To fully understand why some individuals are more successful than others during reproduction, there may be no better place to look. The importance of evolutionary research on other species’ genitalia to the medical field has been recently highlighted in the Journal of Sexual Medicine. Generating new knowledge of what factors affect genital morphology in ducks, one of the few vertebrate species other than humans that form pair bonds and exhibit violent sexual coercion, may have significant applied uses in the future, but we must conduct the basic research first. In the meantime, while we engage in productive and respectful discussion of how we envision the future of our nation, why not marvel at how evolution has resulted in such counterintuitive morphology and bizarre animal behavior?