From Beyond The Rainbow Somewhere

Day: 10/24/2016



MR images of the left inferior fronto-occipital fasciculus (top) before and (middle) after the playing season, and (bottom) the overlay. In the overlay (bottom), the red region is after the season and the blue region is before the season.

Researchers have found measurable brain changes in children after a single season of playing youth football, even without a concussion diagnosis, according to a new study published online in the journal Radiology.

According to USA Football, there are approximately 3 million young athletes participating in organized tackle football across the country. Numerous reports have emerged in recent years about the possible risks of brain injury while playing youth sports and the effects it may have on developing brains. However, most of the research has looked at changes in the brain as a result of concussion.

“Most investigators believe that concussions are bad for the brain, but what about the hundreds of head impacts during a season of football that don’t lead to a clinically diagnosed concussion? We wanted to see if cumulative sub-concussive head impacts have any effects on the developing brain,” said the study’s lead author, Christopher T. Whitlow, M.D., Ph.D., M.H.A., associate professor and chief of neuroradiology at Wake Forest School of Medicine in Winston-Salem, N.C.

The research team studied 25 male youth football players between the ages of 8 and 13. Head impact data were recorded using the Head Impact Telemetry System (HITS), which has been used in other studies of high school and collegiate football to assess the frequency and severity of helmet impacts. In this study, HITS data were analyzed to determine the risk-weighted cumulative exposure associated with a single season of play.

The study participants underwent pre- and post-season evaluation with multimodal neuroimaging, including diffusion tensor imaging (DTI) of the brain. DTI is an advanced MRI technique, which identifies microstructural changes in the brain’s white matter. In addition, all games and practices were video recorded and reviewed to confirm the accuracy of the impacts.

The brain’s white matter is composed of millions of nerve fibers called axons that act like communication cables connecting various regions of the brain. Diffusion tensor imaging produces a measurement, called fractional anisotropy (FA), of the movement of water molecules in the brain and along axons. In healthy white matter, the direction of water movement is fairly uniform and measures high in FA. When water movement is more random, FA values decrease, which has been associated with brain abnormalities in some studies.
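The FA measure described above can be computed directly from the three eigenvalues of the diffusion tensor at each voxel. A minimal sketch (the eigenvalues here are invented, illustrative values rather than data from the study):

```python
import math

def fractional_anisotropy(l1, l2, l3):
    """Fractional anisotropy from the three eigenvalues of the diffusion tensor.

    Returns a value between 0 (fully isotropic diffusion) and 1 (fully
    directional diffusion along a single axis).
    """
    num = math.sqrt((l1 - l2) ** 2 + (l2 - l3) ** 2 + (l3 - l1) ** 2)
    den = math.sqrt(2.0 * (l1 ** 2 + l2 ** 2 + l3 ** 2))
    return num / den if den else 0.0

# Highly directional diffusion, as in a healthy axon bundle: FA near 1
print(round(fractional_anisotropy(1.7, 0.3, 0.3), 2))   # prints 0.8
# Nearly isotropic (random-direction) diffusion: FA near 0
print(round(fractional_anisotropy(1.0, 0.95, 0.9), 2))  # prints 0.05
```

This is why a drop in FA is read as water moving more randomly: the eigenvalues become more similar, and the numerator shrinks.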

The results showed a significant relationship between head impacts and decreased FA in specific white matter tracts and tract terminals, where white and gray matter meet.

“We found that these young players who experienced more cumulative head impact exposure had more changes in brain white matter, specifically decreased FA, in specific parts of the brain,” Dr. Whitlow said. “These decreases in FA caught our attention, because similar changes in FA have been reported in the setting of mild TBI.”

It is important to note that none of the players had any signs or symptoms of concussion.

“We do not know if there are important functional changes related to these findings, or if these effects will be associated with any negative long-term outcomes,” Dr. Whitlow said. “Football is a physical sport, and players may have many physical changes after a season of play that completely resolve. These changes in the brain may also simply resolve with little consequence. However, more research is needed to understand the meaning of these changes to the long-term health of our youngest athletes.”


What was the most dangerous nuclear disaster in world history? Most people would say the Chernobyl nuclear disaster in Ukraine, but they’d be wrong. In 2011, a massive earthquake off Japan’s eastern coast created a tsunami that caused a meltdown at TEPCO’s Fukushima Daiichi nuclear power plant. Three nuclear reactors melted down, and what happened next was the largest release of radiation into the water in the history of the world. Over the next three months, radioactive chemicals, some in even greater quantities than at Chernobyl, leaked into the Pacific Ocean. However, the numbers may actually be much higher, as several scientists have argued in recent years that official Japanese estimates are flawed.

Radioactive Debris from Fukushima approaching North America’s western coast

If that weren’t bad enough, Fukushima continues to leak an astounding 300 tons of radioactive waste into the Pacific Ocean every day. It will continue to do so indefinitely, since the source of the leak cannot be sealed: it is inaccessible to both humans and robots due to extremely high temperatures.

It should come as no surprise, then, that Fukushima has contaminated the entire Pacific Ocean in just five years. This could easily be the worst environmental disaster in human history, and it is almost never talked about by politicians, establishment scientists, or the news. It is interesting to note that the Fukushima reactors were designed by General Electric (GE), one of the largest companies in the world, which has considerable influence over numerous news corporations and politicians alike. Could this possibly explain the lack of news coverage Fukushima has received in the last five years? There is also evidence that GE knew about the poor condition of the Fukushima reactors for decades and did nothing. This led 1,400 Japanese citizens to sue GE for its role in the Fukushima nuclear disaster.

Even if we can’t see the radiation itself, some parts of North America’s western coast have been feeling the effects for years. Not long after Fukushima, fish in Canada began bleeding from their gills, mouths, and eyeballs. This “disease” has been ignored by the government and has decimated native fish populations, including the North Pacific herring. Elsewhere in Western Canada, independent scientists have measured a 300% increase in radiation levels. According to them, the amount of radiation in the Pacific Ocean is increasing every year. Why is this being ignored by the mainstream media? It might have something to do with the fact that the US and Canadian governments have banned their citizens from talking about Fukushima so “people don’t panic.”

Further south in Oregon, USA, starfish began losing legs and then disintegrating entirely when Fukushima radiation arrived there in 2013. Now, they are dying in record numbers, putting the entire oceanic ecosystem in that area at risk. However, government officials say Fukushima is not to blame, even though radiation in Oregon tuna tripled after Fukushima. In 2014, radiation on California beaches increased by 500 percent. In response, government officials said that the radiation was coming from a mysterious “unknown” source and was nothing to worry about.

However, Fukushima is having a bigger impact than just the West coast of North America. Scientists are now saying that the Pacific Ocean is already radioactive and is currently at least 5-10 times more radioactive than when the US government dropped numerous nuclear bombs in the Pacific during and after World War II. If we don’t start talking about Fukushima soon, we could all be in for a very unpleasant surprise.


Maanasa Mendu thinks she’s cracked the code on how to make wind and solar energy affordable.

On Tuesday, Mendu – a 13-year-old from Ohio – won the grand prize in the Discovery Education 3M Young Scientist Challenge for her work in creating a cost-effective ‘solar leaves’ design to create energy. In addition to winning the title of America’s Top Young Scientist, she gets $25,000 for her achievement.

The leaves, designed to help developing areas in need of cheaper power sources, cost roughly US $5 to make.

Over the past three months, Mendu and nine other finalists worked on their projects alongside a mentor provided by 3M.

Mendu was inspired to come up with a cheaper way to produce energy after visiting India, where she saw many people who lacked access to affordable clean water and electricity. Originally, her intent was to harness only wind energy.

Here’s what the product looked like when Mendu entered the competition:

Maanasa Mendu via YouTube

But along the way, Mendu, with the help of her 3M mentor Margaux Mitera, shifted to a different kind of energy collection. Drawing inspiration from how plants function, she decided to focus on creating solar leaves that harnessed vibrational energy.

Here’s how it works: her ‘leaves’ pick up energy from precipitation, wind, and even sunlight using a solar cell and piezoelectric material (the part of the leaf that picks up the vibrations). That energy is then converted into usable electricity.

Here’s what the finished product looked like:

Courtesy 3M

Now that the competition is over, Mendu said she wants to develop the prototype further and conduct more tests so that one day she can make it available commercially.


In Beyond Science, Epoch Times explores research and accounts related to phenomena and theories that challenge our current knowledge. We delve into ideas that stimulate the imagination and open up new possibilities.

At birth, the human brain is little more than an empty storage tank containing 30 billion neurons. In contrast to your wonderfully choreographed body, detailed down to toenails and hair thickness, there is nothing yet written in this most important vital organ. The brain needs to be filled, and that is a process. The process of learning and maturing through various life experiences produces the final description of who you are, and it continues to change over time, in ever smaller increments and at a slower pace.

Although the terminology sounds original, DNAM is actually not a new concept. For example, tracking and profiling Facebook users based on their “likes” is a rudimentary form of DNAM. Such a thing is perceived by some as a dark enterprise nowadays, due to privacy concerns.

When we depart from this present gloomy picture, and imagine what can happen in the future, the meaning of DNAM changes drastically. If DNA cloning ensures the eternal continuation of your body, then DNAM may ensure the immortality of your mind, in a peculiar and exciting manner. The truthfulness of this statement very much depends on how DNAM will evolve from being just a commercial “profile” to something much more spectacular.

Psychological studies offer several, somewhat debatable, theories of human personality. A model for DNAM must use something like Raymond Cattell’s 16 Personality Factors. Scoring each factor on a scale of 1 to 10 (either by measurement or by self-assessment) characterizes your behavior, such as reasoning, emotional stability, sensitivity, and other factors (as shown in the blue chart in Fig. 1 below). Mathematically speaking, if we had Steve Jobs’ blue chart, there could be another 20 million people out there with similar charts. As a result, psychological profiling alone is never unique enough to claim your DNAM.

This classical approach omits the role of a second important element of knowledge, which we call “expertise.” On the same scale of 1 to 10, we can now mark the level of knowledge in various fields (as shown in the green chart in Fig. 1 above). This list can be as long as it needs to be for each person. The expertise can be anything, ranging from how to boil an egg to how to launch a nuclear missile. The blue chart combined with the green chart could potentially depict a unique DNAM for Steve Jobs or anyone else.
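The blue-chart/green-chart combination described above can be sketched as a simple data structure. A minimal illustration follows; all factor names, expertise fields, and scores are invented for the example, and the closeness measure is a simple stand-in, not part of the exClone model:

```python
def dnam_profile(personality, expertise):
    """Concatenate the 'blue chart' (personality) and 'green chart'
    (expertise) into one ordered profile vector of 1-10 scores."""
    return [personality[k] for k in sorted(personality)] + \
           [expertise[k] for k in sorted(expertise)]

def similarity(p, q):
    """Crude closeness measure between two profiles: 1.0 means identical."""
    assert len(p) == len(q)
    max_gap = 9 * len(p)  # scores range 1-10, so each item can differ by at most 9
    return 1.0 - sum(abs(a - b) for a, b in zip(p, q)) / max_gap

# Two people with identical personality scores but different expertise:
blue        = {"reasoning": 9, "emotional_stability": 6, "sensitivity": 8}
green       = {"product_design": 10, "marketing": 9, "boiling_an_egg": 5}
other_blue  = {"reasoning": 9, "emotional_stability": 6, "sensitivity": 8}
other_green = {"product_design": 3, "marketing": 4, "boiling_an_egg": 9}

me = dnam_profile(blue, green)
other = dnam_profile(other_blue, other_green)
print(similarity(me, me))     # 1.0
print(similarity(me, other))  # less than 1.0: expertise separates the profiles
```

This mirrors the article's point: the personality vectors alone collide for millions of people, but adding the expertise vector makes the combined profile far more discriminating.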

The exClone project has undertaken the digital cloning of human expertise. To make exClones useful to society, the main emphasis is given to the expertise part (the green chart). To ensure their organic potential, exClones continue to learn, following the personality traits of their creators (following the blue charts) by means of social conversations and Internet sources. The project is significant in its comprehensive attempt to model deep artificial intelligence.

The uniqueness of the green chart lies in its identification and prioritization of knowledge. For example, between two dentists who went to the same school, it would be impossible to produce equal expertise in real life. Each would have a different clinical experience over time. This unique experience, combined with the personality traits (blue chart), is what makes up the final definition of our minds and DNAM in this exClone model.

Of course, some may say that distilling the definition of the mind down to a number of personality traits and experiences may not capture the essence of the human mind; even so, it could be useful in a practical sense for developing artificial intelligence.

Should the computers we create have personalities and knowledge prioritization? The short answer is “absolutely, yes.” Differences fill all the gaps and avoid common blind spots. That is the power of group thinking and a cornerstone of human civilization. The future of computerized human societies will be more successful with human-like variety as opposed to a single, “can do all,” generic computer model.

Less than a month ago, Elon Musk shared his audacious plan to launch a million people to Mars and beyond, all in hopes of backing up humanity for when some future apocalyptic calamity dooms Earth.

Musk’s 63-minute presentation showed off intricate computer renderings of rockets and spaceships, all part of his so-called Interplanetary Transport System (ITS).

Because he skipped over a lot of technical details, however, Musk logged onto Reddit on Sunday afternoon to let his most discerning fans squeeze information from him in an “Ask Me Anything” session.

During that Q&A, we learned about a very important test SpaceX allegedly has planned “in the coming weeks,” says Musk: filling up an enormous carbon-fiber fuel tank, shown below, that will be essential to making the ITS spaceship work.

The tank – which happens to be the spaceship’s core structure around which everything else is built – has to withstand incredible pressures and stresses at blisteringly cold temperatures, otherwise it might leak or even explode.

Musk shared images of the massive structure toward the end of his Sept. 27 talk at the International Astronautical Congress (IAC). When Reddit user nalyd8991 asked Musk for more information about it on Sunday, he said the gargantuan carbon-fiber tank “was really the big news” of his IAC talk – or at least “for those that know their stuff”.

‘The hardest part of the spaceship’

“[T]his is really the hardest part of the spaceship,” Musk said at the IAC. “The other pieces … we have a pretty good handle on, but this was the trickiest one. So we wanted to tackle it first.”

In short, the entire spaceship will be centered around such a tank – so getting it right is crucial. Here’s where it will be placed in the spaceship:

Carbon-fiber fuel tanks for spacecraft aren’t a new concept – Boeing and NASA began work on a huge one in 2014 – but we’ve never seen one this enormous at roughly 40 feet wide.

Here’s what it looks like from the inside:

Engineers likely made the tank out of carbon fiber because that material is lighter, shrinks less, and is stronger than the metal alloys that many cryogenic fuel tanks are made out of. So it will presumably make for safer transport to Mars while using less fuel (though at a considerably higher cost of construction).

The material isn’t easy to work with, though, as Musk explained during his IAC talk:

“Even though carbon fiber has incredible strength-to-weight, when you want to put super-cold liquid oxygen and liquid methane – particularly liquid oxygen – in the tank, it’s subject to cracking and leaking and it’s a very difficult thing to make.

“Just the sheer scale of it is also challenging, because you’ve gotta lay out the carbon fiber in exactly the right way on a huge mold, and you’ve gotta cure that mold at temperature, and then it’s… just really hard to make large carbon-fiber structures that can do all of those things and carry incredible loads.”

During his Reddit AMA on Sunday, Musk dropped a few bits of new information about the tank as well:

“[T]he flight tank will actually be slightly longer than the development tank shown, but the same diameter,” Musk said.

He also noted it was “built with latest and greatest carbon fiber prepreg,” or carbon fiber that’s pre-impregnated with glue-like resin to make it tougher. “In theory, it should hold cryogenic propellant without leaking and without a sealing liner,” he said. “Early tests are promising.”

Finally, he teased a much grander test: “Will take it up to 2/3 of burst pressure on an ocean barge in the coming weeks.”

Although Musk didn’t answer any follow-up questions about this mysterious ocean test, we assume that by “barge,” he means one of the robotic drone ships that have caught a handful of SpaceX’s self-landing Falcon 9 rocket boosters in the recent past, such as the one shown below.

We also don’t yet know how SpaceX plans to pressurize the giant black tank. It could be with plain old air, but it might be their fuel of choice for Mars: flammable (and explosive) methane gas.

Several things might suggest this: First, lugging a giant orb out into the middle of the ocean sounds like no simple effort, so there’s an expectation that the device could explode – not just merely leak. If no one is around and it blows up, its shrapnel (and possibly flames) can’t hurt anyone. Luckily, SpaceX’s ocean barges have proven capable of withstanding extreme blasts in the past:

Second, Musk said SpaceX performed “initial tests with the cryogenic propellant” that “actually look quite positive. We have not seen any leaks or major issues,” he said.

If the company has already pumped methane into the sphere-like tank before, it stands to reason they’ll try again to get even more data on its performance.



This visual abstract depicts how genetic variants enriched in population-specific signals of natural selection and, among Europeans, of Neandertal ancestry play a major role in the differences in transcriptional responses to inflammatory and infectious challenges observed between human populations.

Credit: Quach et al./Cell 2016

It’s long been clear that people from different parts of the world differ in their susceptibility to developing infections as well as chronic inflammatory and autoimmune diseases. Now, two studies reported in Cell on October 20 show that those differences in disease susceptibility can be traced in large part to differences at the genetic level directing the way the immune systems of people with European and African ancestry are put together.

The researchers also found that differences between populations have been selected for over time because they conferred advantages to people facing distinct health challenges in the places where they lived. As a result, according to the new evidence, people of African ancestry generally show stronger immune responses than Europeans do.

The discovery suggests that European populations have been selected to display reduced immune responses since our ancestors first made their way out of Africa. Intriguingly, the immune systems of Europeans were partly shaped by the introduction of new genetic variants through interbreeding between some of our early European ancestors and Neanderthals.

“Our findings show that population differences in transcriptional responses to immune activation are widespread, and that they are mainly accounted for by genetic variants that differ in their frequencies between human populations,” said Lluis Quintana-Murci of Institut Pasteur and CNRS in Paris, France, who led one of the two studies.

“I was expecting to see ancestry-associated differences in immune response but not such a clear trend towards an overall stronger response to infection among individuals of African descent,” added Luis Barreiro of the University of Montreal and the CHU Sainte-Justine in Canada, senior author of the other study.

Quintana-Murci and colleagues used RNA-sequencing to characterize the way that immune cells, known as primary monocytes, derived from 200 people of self-reported African or European ancestry would respond to attack by a bacterium or a virus. The researchers detected many differences in the activity of particular genes in those immune cells both within and between populations. They also discovered that changes in a single gene encoding an important immune receptor lead to decreased inflammation only in Europeans.

The researchers found strong evidence of selection on genes that control the immune response. Their evidence also shows that Europeans “borrowed” some key regulatory variants from Neanderthals, which in particular affect the way their immune systems respond to viral challenges.

Barreiro and colleagues took a similar approach to test for the effects of African versus European ancestry on changes in the activity of immune cells. His group focused on another type of immune cell known as primary macrophages and their response to live bacterial pathogens.

The researchers infected macrophages derived from 80 African and 95 European individuals with either Listeria monocytogenes or Salmonella typhimurium to look for differences in response and related them to ancestry. Their studies identified thousands of genes showing population differences in transcriptional response to infection. They also found that African ancestry is associated with a stronger inflammatory response, which limited the growth of bacteria.

In many cases, the activity of particular genes was tied to a single genetic variant, with strong differences in frequency between European and African populations. The researchers also observed the signature of past selection on those genes and additional evidence for an important role of genetic variants passed on to modern humans from Neanderthals. “This strongly suggests that a diminished inflammatory response has conferred a selective advantage to European populations,” Quintana-Murci said.

“The genetic and molecular basis of ancestry-related differences in disease susceptibility has been a mystery,” Barreiro said. “These results provide a first description of differences in immune response and associated genetic basis that might explain differences in susceptibility to disease between people of African and European ancestry. More generally, our results demonstrate how historical selective events continue to shape human phenotypic diversity today, including for traits that are key to controlling infection.”

The researchers noted that the two studies made strikingly similar findings despite the fact that they focused on different types of immune cells. They say that more work is now needed to better understand the role of environmental and other factors, including epigenetic changes, in the differences they’ve observed.


The chemical structure of glutathione, an antioxidant that may help resist the toxins that are an underlying cause of aging.

Researchers at Oregon State University have found that a specific detoxification compound, glutathione, helps resist the toxic stresses of everyday life – but its levels decline with age and this sets the stage for a wide range of age-related health problems.

A new study, published in the journal Redox Biology, also highlighted a compound – N-acetyl-cysteine, or NAC – that is already used in high doses in medical detoxification emergencies. But the researchers said that at much lower levels NAC might help maintain glutathione levels and prevent the routine metabolic declines associated with aging.

In that context, the research not only offers some profound insights into why the health of animals declines with age, but specifically points to a compound that might help prevent some of the toxic processes involved.

The decline of these detoxification pathways, scientists say, is causally linked to cardiovascular disease, diabetes, and cancer, some of the primary causes of death in the developed world.

“We’ve known for some time of the importance of glutathione as a strong antioxidant,” said Tory Hagen, lead author on the research and the Helen P. Rumbel Professor for Healthy Aging Research in the Linus Pauling Institute at OSU.

“What this study pointed out was the way that cells from younger animals are far more resistant to stress than those from older animals,” said Hagen, also a professor of biochemistry in the OSU College of Science. “In young animal cells, stress doesn’t cause such a rapid loss of glutathione. The cells from older animals, on the other hand, were quickly depleted of glutathione and died twice as fast when subjected to stress.

“But pretreatment with NAC increased glutathione levels in the older cells and largely helped offset that level of cell death.”

Glutathione, Hagen said, is such an important antioxidant that its existence appears to date back as far as oxygen-dependent, or aerobic life itself – about 1.5 billion years. It’s a principal compound to detoxify environmental stresses, air pollutants, heavy metals, pharmaceuticals and many other toxic insults.

In this study, the scientists compared the resistance of young cells to toxins with that of older cells. They used a toxic compound called menadione to stress the cells, and in the face of that stress the younger cells lost significantly less of their glutathione than the older cells did. The glutathione levels of young rat cells never decreased to less than 35 percent of their initial level, whereas in older rat cells glutathione levels plummeted to 10 percent of their original level.

NAC, the researchers said, is known to boost the metabolic function of glutathione and increase its rate of synthesis. It’s already used in emergency medicine to help patients in a toxic crisis, such as ingestion of poisonous levels of heavy metals. It’s believed to be a very safe compound to use even at extremely high levels – and the scientists are hypothesizing that it might have significant value at much lower doses to maintain glutathione levels and improve health.

“I’m optimistic there could be a role for this compound in preventing the increased toxicity we face with aging, as our abilities to deal with toxins decline,” Hagen said. “We might be able to improve the metabolic resilience that we’re naturally losing with age.”

Also of interest, Hagen said, is the wide range of apparent detoxification potential offered by glutathione. Higher levels of it – boosted by NAC – might help reduce the toxicity of some prescription drugs, cancer chemotherapies, and treat other health issues.

“Using NAC as a prophylactic, instead of an intervention, may allow glutathione levels to be maintained for detoxification in older adults,” the researchers wrote in their conclusion.


IN BRIEF

Engineers from the University of Cambridge have created an ultra low power transistor that can run for a long time without a power source.

This tech could be used in various sensor interfaces and wearable devices, or in more autonomous electronics that can harness energy from their environments.

SCAVENGING POWER

As electronic devices become more compact and powerful, conventional methods for manufacturing electrical components simply won’t do. The problem lies in the fact that current systems require a huge battery and their components are too bulky.

However, that all could change, as engineers from the University of Cambridge have created an ultra low power transistor that can run for a long time without a power source.

Basically, transistors are semiconductor devices that function like a faucet. Turn a transistor on and the electricity flows; turn it off and the flow stops. When a transistor is off, however, some electric current can still flow through, just like a leaky faucet. This residual current, which flows in the transistor’s near-off state, was exploited by the engineers to power the new transistors.

These new transistors are able to scavenge power from their surrounding environment, allowing a battery to last far longer. Dr Sungsik Lee, the paper’s first author, from Cambridge’s Department of Engineering, says, “if we were to draw energy from a typical AA battery based on this design, it would last for a billion years.” The new design can be produced at low temperatures and is versatile enough to be printed on materials like glass, paper, and plastic.
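Lee’s billion-year claim is easy to sanity-check with back-of-envelope arithmetic. The AA battery capacity used below is a typical alkaline-cell figure, not a number from the paper:

```python
# Rough check of the "AA battery for a billion years" claim.
battery_joules = 2.5 * 3600              # typical alkaline AA: ~2.5 Wh ~ 9,000 J (assumed)
seconds_per_year = 365.25 * 24 * 3600    # ~3.16e7 s
billion_years = 1e9 * seconds_per_year   # ~3.16e16 s

# The average power draw implied by draining one AA cell over a billion years:
power_watts = battery_joules / billion_years
print(f"{power_watts * 1e12:.2f} pW")    # prints 0.29 pW
```

In other words, the claim implies an operating power of a fraction of a picowatt, which is consistent with the "near-off-state" leakage currents the design runs on.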

SMALLER DEVICES

The transistor’s design also exploits a ‘non-desirable’ characteristic, namely the ‘Schottky barrier’, to create smaller transistors. Transistors today cannot be made much smaller because the smaller a transistor gets, the more its electrodes influence each other, rendering the transistor non-functional. The use of the Schottky barrier in the new design creates a seal between the electrodes that lets them work independently of each other.

According to Arokia Nathan of Cambridge’s Department of Engineering, the second author of the paper, this new design can see use in various sensor interfaces and wearable devices that require only a low amount of power to run. Professor Gehan Amaratunga, Head of the Electronics, Power and Energy Conversion Group at Cambridge’s Engineering Department, sees its use in more autonomous electronics that can harness energy from their environments, much as bacteria do.


IN BRIEF

In an attempt to bring the next generation of computers to life, teams around the globe have been working with carbon nanotubes – one of the most conductive materials ever discovered. Now, for the first time ever, scientists have made a transistor using carbon nanotubes that beats silicon.

For the first time, scientists have built a transistor out of carbon nanotubes that can run almost twice as fast as its silicon counterparts.

This is big, because for decades, scientists have been trying to figure out how to build the next generation of computers using carbon nanotube components, because their unique properties could form the basis of faster devices that consume way less power.

“Making carbon nanotube transistors that are better than silicon transistors is a big milestone,” said one of the team, Michael Arnold, from the University of Wisconsin-Madison. “This achievement has been a dream of nanotechnology for the last 20 years.”

First developed back in 1991, carbon nanotubes are basically minuscule carbon straws whose walls measure just 1 atom thick.

Imagine a tiny, cylindrical tube that’s approximately 50,000 times smaller than the width of a human hair, and made from carbon atoms arranged in hexagonal arrays. That’s what a carbon nanotube wire would look like if you could see it at an atomic level.

Because of their size, carbon nanotubes can be packed by the millions onto wafers that can act just like a silicon transistor – the electrical switches that together form a computer’s central processing unit (CPU).

Despite being incredibly tiny, carbon nanotubes have some unique properties that make them an engineer’s dream.

And here’s the best part: just like that other 1-atom-thick wonder material, graphene, carbon nanotubes are among the most conductive materials ever discovered.

With ultra-strong bonds holding the carbon atoms together in a hexagonal pattern, carbon nanotubes produce a phenomenon known as electron delocalisation, which allows an electrical charge to move freely through them.

The arrangement of the carbon atoms also allows heat to move steadily through the tube, which gives it around 15 times the thermal conductivity and 1,000 times the current capacity of copper, while maintaining a density that’s just half that of aluminium.

Because of all these amazing properties, these semiconducting powerhouses could be our answer to the rapidly declining potential of silicon-based computers.

Right now, all of our computers are running on silicon processors and memory chips, but we’ve about hit the limit for how fast these can go. If scientists can figure out how to replace silicon-based parts with carbon nanotube parts, in theory, we could bump speeds up by five times instantly.

But there’s a major problem with mass-producing carbon nanotubes – they’re incredibly difficult to isolate from all the small metallic impurities that creep in during the manufacturing process, and these bits and pieces can interrupt their semiconducting properties.

But Arnold and his team have finally figured out how to get rid of almost all of these impurities. “We’ve identified specific conditions in which you can get rid of nearly all metallic nanotubes, where we have less than 0.01 percent metallic nanotubes,” he says.

As Daniel Oberhaus explains for Motherboard, the technique works by controlling the self-assembling properties of carbon nanotubes in a polymer solution, which not only allows the researchers to clean out impurities, but also to manipulate the proper spacing of nanotubes on a wafer.

“The end result are nanotubes with less than 0.01 percent metallic impurities, integrated on a transistor that was able to achieve a current that was 1.9 times higher than the most state-of-the-art silicon transistors in use today,” he says.

Simulations have suggested that in their purest form, carbon nanotube transistors should be able to perform five times faster or use five times less energy than silicon transistors, because their ultra-small dimensions allow them to very quickly switch a current signal as it travels across them.
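The link between the 1.9× on-current reported above and switching speed can be seen with the standard gate-delay metric, t = CV/I: the transistor must move a fixed charge (capacitance times voltage) onto the gate of the next stage, so more current means a proportionally shorter delay. A toy sketch, with purely illustrative parameter values (none of these numbers come from the paper):

```python
# Toy gate-delay estimate using the standard t = C * V / I metric.
# All parameter values are hypothetical, chosen only to show scale.

C = 1e-15      # gate capacitance in farads (illustrative)
V = 0.7        # supply voltage in volts (illustrative)

I_si = 1.0e-3        # on-current of a silicon device, amps (hypothetical)
I_cnt = 1.9 * I_si   # 1.9x the current, as reported for the CNT device

delay_si = C * V / I_si
delay_cnt = C * V / I_cnt

# Same charge moved with 1.9x the current -> delay shrinks by 1/1.9.
print(delay_cnt / delay_si)   # ~0.526
```

This first-order estimate is why higher drive current is treated as a direct proxy for speed; a real comparison would also account for capacitance and voltage differences between the two technologies.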

This means longer-lasting phone batteries, or much faster wireless communications or processing speeds, but scientists have to actually build a working computer filled with carbon nanotube transistors before we can know for sure.

Arnold’s team has already managed to scale their wafers up to 2.5 cm by 2.5 cm (1 inch by 1 inch), so they’re now figuring out how to make the process efficient enough for commercial production.

Two astronomers think they have plucked signals from 234 extraterrestrial civilisations.

In 2012, Ermanno Borra of Laval University in Quebec suggested that lasers might be the method by which we receive first contact from extraterrestrial intelligence.

He hypothesised that we would see faint, rapid periodic bursts of light hidden in the spectrum of a civilisation’s host star.

We’d discover them through mathematical analysis of the spectrum and the signal.
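The mathematical analysis in question is essentially Fourier analysis: a train of periodic light pulses imprints a periodic modulation on the star’s spectrum, and a Fourier transform turns that modulation into a sharp, easily detected peak. A minimal sketch of the idea on synthetic data – this is not Borra and Trottier’s actual pipeline, and every number below is made up for illustration:

```python
# Sketch: recovering a faint periodic modulation hidden in a noisy
# "stellar spectrum" with a Fourier transform. All values illustrative.
import numpy as np

rng = np.random.default_rng(0)

n = 4096
wavelength = np.linspace(400.0, 700.0, n)   # nm, an illustrative band

# Smooth stellar continuum plus random noise...
continuum = 1.0 + 0.1 * np.sin(2 * np.pi * (wavelength - 400) / 300)
noise = 0.05 * rng.standard_normal(n)

# ...with a faint periodic modulation buried in it (hypothetical:
# 250 cycles across the band, at 1 percent of the continuum level).
signal = 0.01 * np.cos(2 * np.pi * 250 * (wavelength - 400) / 300)

spectrum = continuum + noise + signal

# Fit and subtract the slowly varying continuum, then Fourier-transform
# the residual; a periodic modulation stands out as a sharp peak.
coeffs = np.polynomial.polynomial.polyfit(wavelength, spectrum, 5)
detrended = spectrum - np.polynomial.polynomial.polyval(wavelength, coeffs)
power = np.abs(np.fft.rfft(detrended)) ** 2

# Skip the lowest bins (continuum-fit residue) and locate the peak.
peak_bin = 10 + int(np.argmax(power[10:]))
print(peak_bin)   # recovers the injected 250-cycle modulation
```

The modulation is invisible to the eye in the raw spectrum (1 percent against 5 percent noise), yet the transform concentrates its power into a single bin far above the noise floor – which is why this kind of search can be run automatically over millions of archived spectra.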

To send this kind of signal would not be beyond our means currently – should we wish to reveal ourselves to the universe, we could do so with the Helios laser at the Lawrence Livermore National Laboratory.

Borra instructed graduate student Eric Trottier to comb through 2.5 million stars in search of such a signal; Trottier found it in 234 of them, most of which are in the same spectral class as the Sun.

Borra said of the results:

We have to follow a scientific approach, not an emotional one, but intuitively – my emotion speaks now – I strongly suspect that it’s an ETI signal.

Other astronomers are less convinced.

Andrew Siemion, the director of the SETI Research Centre at the University of California Berkeley, said:

There is perhaps no bolder claim that one could make in observational astrophysics than the discovery of intelligent life beyond the Earth.

It’s an incredibly profound subject—and of course that’s why many of us devote our lives to the field and put so much energy into trying to answer these questions. But you can’t make such definitive statements about detections unless you’ve exhausted every possible means of follow-up.

The Breakthrough Listen Initiative, set up to monitor the stars Borra found emitting these signals, later ranked the detection between zero and one on the Rio Scale for SETI observations, meaning that it is insignificant.

An omni-directional beacon would be ranked a two on the index, whereas an Earth-specific message would be a six.

The scale does give you pause for thought – it’s likely our first contact with extraterrestrial life will be a lot more ambiguous than a giant spaceship landing in Manhattan.