Commentary about nanotech, science policy and communication, society, and the arts

Tag Archives: Singapore

“Feeling no pain” can be a euphemism for being drunk. However, there are some people for whom it’s not a euphemism: for one reason or another, they literally feel no pain. One such group is amputees, and a researcher at Johns Hopkins University (Maryland, US) has found a way to let them feel pain again.

Amputees often experience the sensation of a “phantom limb” — a feeling that a missing body part is still there.

That sensory illusion is closer to becoming a reality thanks to a team of engineers at the Johns Hopkins University that has created an electronic skin. When layered on top of prosthetic hands, this e-dermis brings back a real sense of touch through the fingertips.

“After many years, I felt my hand, as if a hollow shell got filled with life again,” says the anonymous amputee who served as the team’s principal volunteer tester.

Made of fabric and rubber laced with sensors to mimic nerve endings, e-dermis recreates a sense of touch as well as pain by sensing stimuli and relaying the impulses back to the peripheral nerves.

“We’ve made a sensor that goes over the fingertips of a prosthetic hand and acts like your own skin would,” says Luke Osborn, a graduate student in biomedical engineering. “It’s inspired by what is happening in human biology, with receptors for both touch and pain.”

“This is interesting and new,” Osborn said, “because now we can have a prosthetic hand that is already on the market and fit it with an e-dermis that can tell the wearer whether he or she is picking up something that is round or whether it has sharp points.”

The work – published June 20 in the journal Science Robotics – shows it is possible to restore a range of natural, touch-based feelings to amputees who use prosthetic limbs. The ability to detect pain could be useful, for instance, not only in prosthetic hands but also in lower limb prostheses, alerting the user to potential damage to the device.

Human skin contains a complex network of receptors that relay a variety of sensations to the brain. This network provided a biological template for the research team, which includes members from the Johns Hopkins departments of Biomedical Engineering, Electrical and Computer Engineering, and Neurology, and from the Singapore Institute of Neurotechnology.

Bringing a more human touch to modern prosthetic designs is critical, especially when it comes to incorporating the ability to feel pain, Osborn says.

“Pain is, of course, unpleasant, but it’s also an essential, protective sense of touch that is lacking in the prostheses that are currently available to amputees,” he says. “Advances in prosthesis designs and control mechanisms can aid an amputee’s ability to regain lost function, but they often lack meaningful, tactile feedback or perception.”

That is where the e-dermis comes in, conveying information to the amputee by stimulating peripheral nerves in the arm, making the so-called phantom limb come to life. The e-dermis device does this by electrically stimulating the amputee’s nerves in a non-invasive way, through the skin, says the paper’s senior author, Nitish Thakor, a professor of biomedical engineering and director of the Biomedical Instrumentation and Neuroengineering Laboratory at Johns Hopkins.

“For the first time, a prosthesis can provide a range of perceptions, from fine touch to noxious to an amputee, making it more like a human hand,” says Thakor, co-founder of Infinite Biomedical Technologies, the Baltimore-based company that provided the prosthetic hardware used in the study.

Inspired by human biology, the e-dermis enables its user to sense a continuous spectrum of tactile perceptions, from light touch to noxious or painful stimulus. The team created a “neuromorphic model” mimicking the touch and pain receptors of the human nervous system, allowing the e-dermis to electronically encode sensations just as the receptors in the skin would. Tracking brain activity via electroencephalography, or EEG, the team determined that the test subject was able to perceive these sensations in his phantom hand.

The researchers then connected the e-dermis output to the volunteer using a noninvasive method known as transcutaneous electrical nerve stimulation, or TENS. In a pain-detection task, the team determined that the test subject and the prosthesis were able to experience a natural, reflexive reaction both to pain when touching a pointed object and to non-pain when touching a round object.
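The neuromorphic encoding idea described above – mapping what the fingertip sensors feel onto nerve-stimulation patterns that read as either touch or pain – can be sketched in a few lines. This is purely illustrative: the thresholds, frequency bands, units, and function names below are invented for the sketch, not taken from the published model.

```python
# Illustrative sketch only -- not the published neuromorphic model.
# It shows the general idea of rate-coding touch vs. pain: gentle,
# rounded contact maps to a low stimulation frequency (perceived as
# touch), while a sharp contact crosses a nociceptor-like threshold
# and maps to a high frequency (perceived as pain). All numbers here
# are invented for illustration.

def encode_stimulus(pressure_kpa: float, sharpness: float) -> dict:
    """Map a fingertip sensor reading to a TENS-style pulse train.

    pressure_kpa: contact pressure from the e-dermis sensor (hypothetical units)
    sharpness: 0.0 = flat/round object, 1.0 = sharp point
    """
    PAIN_THRESHOLD = 0.7          # sharpness above this reads as noxious
    TOUCH_FREQ_HZ = (5.0, 40.0)   # innocuous-touch frequency band (invented)
    PAIN_FREQ_HZ = (60.0, 120.0)  # noxious-stimulus band (invented)

    intensity = min(pressure_kpa / 100.0, 1.0)  # normalize to [0, 1]
    if sharpness > PAIN_THRESHOLD:
        lo, hi = PAIN_FREQ_HZ
        percept = "pain"
    else:
        lo, hi = TOUCH_FREQ_HZ
        percept = "touch"
    return {"percept": percept, "pulse_freq_hz": lo + intensity * (hi - lo)}

round_obj = encode_stimulus(pressure_kpa=50.0, sharpness=0.1)
sharp_obj = encode_stimulus(pressure_kpa=50.0, sharpness=0.9)
```

At equal pressure, the sharp object produces a higher-frequency pulse train than the round one, which is the distinction the volunteer perceived as pain versus touch.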

The e-dermis is not sensitive to temperature; for this study, the team focused on detecting object curvature (for touch and shape perception) and sharpness (for pain perception). The e-dermis technology could be used to make robotic systems more human, and it could also be extended to astronaut gloves and space suits, Osborn says.

The researchers plan to further develop the technology and better understand how to provide meaningful sensory information to amputees in the hopes of making the system ready for widespread patient use.

Johns Hopkins is a pioneer in the field of upper limb dexterous prostheses. More than a decade ago, the university’s Applied Physics Laboratory led the development of the advanced Modular Prosthetic Limb, which an amputee patient controls with the muscles and nerves that once controlled his or her real arm or hand.

In addition to the funding from Space@Hopkins, which fosters space-related collaboration across the university’s divisions, the team also received grants from the Applied Physics Laboratory Graduate Fellowship Program and the Neuroengineering Training Initiative through the National Institute of Biomedical Imaging and Bioengineering at the National Institutes of Health under grant T32EB003383.

The e-dermis was tested over the course of one year on an amputee who volunteered in the Neuroengineering Laboratory at Johns Hopkins. The subject frequently repeated the testing to demonstrate consistent sensory perceptions via the e-dermis. The team has worked with four other amputee volunteers in other experiments to provide sensory feedback.

Here’s a video about this work,

Sarah Zhang’s June 20, 2018 article for The Atlantic reveals a few more details while covering some of the material in the news release,

Osborn and his team added one more feature to make the prosthetic hand, as he puts it, “more lifelike, more self-aware”: When it grasps something too sharp, it’ll open its fingers and immediately drop it—no human control necessary. The fingers react in just 100 milliseconds, the speed of a human reflex. Existing prosthetic hands have a similar degree of theoretically helpful autonomy: If an object starts slipping, the hand will grasp more tightly. Ideally, users would have a way to override a prosthesis’s reflex, like how you can hold your hand on a stove if you really, really want to. After all, the whole point of having a hand is being able to tell it what to do.
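The reflex Zhang describes is, at its core, a threshold check running inside a fast control loop. Here is a minimal sketch of that idea, with an invented threshold and interface (the real controller is not described at this level of detail in the coverage):

```python
# Hypothetical sketch of the drop reflex: one control-loop tick that
# opens the hand when the sharpness (pain) signal crosses a threshold,
# with no user command required. Threshold and API are invented.

PAIN_THRESHOLD = 0.7  # sharpness above this triggers the release reflex

def reflex_step(sharpness: float, grip_closed: bool) -> bool:
    """Return the new grip state after one ~100 ms control-loop tick."""
    if grip_closed and sharpness > PAIN_THRESHOLD:
        return False  # open the fingers and drop the object
    return grip_closed

# The user override Zhang mentions would simply gate the check:
def reflex_step_with_override(sharpness: float, grip_closed: bool,
                              override: bool) -> bool:
    if override:
        return grip_closed  # user insists on holding on
    return reflex_step(sharpness, grip_closed)
```

The point of the override gate is exactly the one the article makes: the autonomous reflex is helpful by default, but the wearer should be able to keep telling the hand what to do.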

Scientists in Singapore were inspired by dragonflies and cicadas according to a March 28, 2018 news item on Nanowerk (Note: A link has been removed),

Studies have shown that the wings of dragonflies and cicadas prevent bacterial growth due to their natural structure. The surfaces of their wings are covered in nanopillars making them look like a bed of nails. When bacteria come into contact with these surfaces, their cell membranes get ripped apart immediately and they are killed. This inspired researchers from the Institute of Bioengineering and Nanotechnology (IBN) of A*STAR to invent an anti-bacterial nano coating for disinfecting frequently touched surfaces such as door handles, tables and lift buttons.

This technology will prove particularly useful in creating bacteria-free surfaces in places like hospitals and clinics, where sterilization is important to help control the spread of infections. Their new research was recently published in the journal Small (“ZnO Nanopillar Coated Surfaces with Substrate-Dependent Superbactericidal Property”).

Image 1: Zinc oxide nanopillars that looked like a bed of nails can kill a broad range of germs when used as a coating on frequently-touched surfaces. Courtesy: A*STAR

80% of common infections are spread by hands, according to the B.C. [province of Canada] Centre for Disease Control. Disinfecting commonly touched surfaces helps to reduce the spread of harmful germs by our hands, but this requires manual and repeated disinfection because germs grow rapidly. Current disinfectants may also contain chemicals like triclosan, which is not recognized as safe and effective, and may lead to bacterial resistance and environmental contamination if used extensively.

“There is an urgent need for a better way to disinfect surfaces without causing bacterial resistance or harm to the environment. This will help us to prevent the transmission of infectious diseases from contact with surfaces,” said IBN Executive Director Professor Jackie Y. Ying.

To tackle this problem, a team of researchers led by IBN Group Leader Dr Yugen Zhang created a novel nano coating that can spontaneously kill bacteria upon contact. Inspired by studies on dragonflies and cicadas, the IBN scientists grew nanopillars of zinc oxide, a compound known for its anti-bacterial and non-toxic properties. The zinc oxide nanopillars can kill a broad range of germs like E. coli and S. aureus that are commonly transmitted from surface contact.

Tests on ceramic, glass, titanium and zinc surfaces showed that the coating effectively killed up to 99.9% of germs found on the surfaces. As the bacteria are killed mechanically rather than chemically, the use of the nano coating would not contribute to environmental pollution. Also, the bacteria will not be able to develop resistance as they are completely destroyed when their cell walls are pierced by the nanopillars upon contact.

Further studies revealed that the nano coating demonstrated the best bacteria-killing power when applied on zinc surfaces, compared with other surfaces. This is because the zinc oxide nanopillars catalyzed the release of superoxides (or reactive oxygen species), which could even kill nearby free-floating bacteria that were not in direct contact with the surface. This super bacteria-killing power from the combination of nanopillars and zinc broadens the scope of applications of the coating beyond hard surfaces.

Subsequently, the researchers studied the effect of placing a piece of zinc that had been coated with zinc oxide nanopillars into water containing E. coli. All the bacteria were killed, suggesting that this material could potentially be used for water purification.

Dr Zhang said, “Our nano coating is designed to disinfect surfaces in a novel yet practical way. This study demonstrated that our coating can effectively kill germs on different types of surfaces, and also in water. We were also able to achieve super bacteria killing power when the coating was used on zinc surfaces because of its dual mechanism of action. We hope to use this technology to create bacteria-free surfaces in a safe, inexpensive and effective manner, especially in places where germs tend to accumulate.”

IBN has recently received a grant from the National Research Foundation, Prime Minister’s Office, Singapore, under its Competitive Research Programme to further develop this coating technology in collaboration with Tan Tock Seng Hospital for commercial application over the next 5 years.

One final comment: this research reminds me of research into simulating shark skin, because that too has bacteria-killing nanostructures. My latest about the sharkskin research is a September 18, 2014 posting.

I was not expecting a Canadian connection but it seems we are heavily invested in this research at the Georgia Institute of Technology (Georgia Tech), from a March 19, 2018 news item on ScienceDaily,

Some novel materials that sound too good to be true turn out to be true and good. An emergent class of semiconductors, which could affordably light up our future with nuanced colors emanating from lasers, lamps, and even window glass, could be the latest example.

These materials are very radiant, easy to process from solution, and energy-efficient. The nagging question of whether hybrid organic-inorganic perovskites (HOIPs) could really work just received a very affirmative answer in a new international study led by physical chemists at the Georgia Institute of Technology.

The researchers observed in an HOIP a “richness” of semiconducting physics created by what could be described as electrons dancing on chemical underpinnings that wobble like a funhouse floor in an earthquake. That bucks conventional wisdom because established semiconductors rely upon rigidly stable chemical foundations, that is to say, quieter molecular frameworks, to produce the desired quantum properties.

“We don’t know yet how it works to have these stable quantum properties in this intense molecular motion,” said first author Felix Thouin, a graduate research assistant at Georgia Tech. “It defies physics models we have to try to explain it. It’s like we need some new physics.”

Quantum properties surprise

Their gyrating jumbles have made HOIPs challenging to examine, but the team of researchers from a total of five research institutes in four countries succeeded in measuring a prototypical HOIP and found its quantum properties on par with those of established, molecularly rigid semiconductors, many of which are graphene-based.

“The properties were at least as good as in those materials and may be even better,” said Carlos Silva, a professor in Georgia Tech’s School of Chemistry and Biochemistry. Not all semiconductors also absorb and emit light well, but HOIPs do, making them optoelectronic and thus potentially useful in lasers, LEDs, other lighting applications, and also in photovoltaics.

The lack of molecular-level rigidity in HOIPs also makes them more flexible to produce and apply.

Silva co-led the study with physicist Ajay Ram Srimath Kandada. Their team published the results of their study on two-dimensional HOIPs on March 8, 2018, in the journal Physical Review Materials. Their research was funded by EU Horizon 2020, the Natural Sciences and Engineering Research Council of Canada, the Fond Québécois pour la Recherche, the [National] Research Council of Canada, and the National Research Foundation of Singapore. [emphases mine]

The ‘solution solution’

Commonly, semiconducting properties arise from static crystalline lattices of neatly interconnected atoms. In silicon, for example, which is used in most commercial solar cells, they are interconnected silicon atoms. The same principle applies to graphene-like semiconductors.

“These lattices are structurally not very complex,” Silva said. “They’re only one atom thin, and they have strict two-dimensional properties, so they’re much more rigid.”

“You forcefully limit these systems to two dimensions,” said Srimath Kandada, who is a Marie Curie International Fellow at Georgia Tech and the Italian Institute of Technology. “The atoms are arranged in infinitely expansive, flat sheets, and then these very interesting and desirable optoelectronic properties emerge.”

These proven materials impress. So, why pursue HOIPs, except to explore their baffling physics? Because they may be more practical in important ways.

“One of the compelling advantages is that they’re all made using low-temperature processing from solutions,” Silva said. “It takes much less energy to make them.”

By contrast, graphene-based materials are produced at high temperatures in small amounts that can be tedious to work with. “With this stuff (HOIPs), you can make big batches in solution and coat a whole window with it if you want to,” Silva said.

Funhouse in an earthquake

For all an HOIP’s wobbling, it’s also a very ordered lattice with its own kind of rigidity, though less limiting than in the customary two-dimensional materials.

“It’s not just a single layer,” Srimath Kandada said. “There is a very specific perovskite-like geometry.” Perovskite refers to the shape of an HOIP’s crystal lattice, which is a layered scaffolding.

“The lattice self-assembles,” Srimath Kandada said, “and it does so in a three-dimensional stack made of layers of two-dimensional sheets. But HOIPs still preserve those desirable 2D quantum properties.”

Those sheets are held together by interspersed layers of another molecular structure that is a bit like a sheet of rubber bands. That makes the scaffolding wiggle like a funhouse floor.

“At room temperature, the molecules wiggle all over the place. That disrupts the lattice, which is where the electrons live. It’s really intense,” Silva said. “But surprisingly, the quantum properties are still really stable.”

Having quantum properties work at room temperature without requiring ultra-cooling is important for practical use as a semiconductor.

Going back to what HOIP stands for – hybrid organic-inorganic perovskite – this is how the experimental material fits into the HOIP chemical class: it is a hybrid of inorganic lead iodide layers (the rigid part) separated by organic layers of phenylethylammonium (the rubber band-like parts), with the chemical formula (PEA)2PbI4.

Before an applicable material is developed, the lead in this prototypical material could be swapped out for a metal that is safer for humans to handle.

Electron choreography

HOIPs are great semiconductors because their electrons do an acrobatic square dance.

Usually, electrons live in an orbit around the nucleus of an atom or are shared by atoms in a chemical bond. But HOIP chemical lattices, like all semiconductors, are configured to share electrons more broadly.

Energy levels in a system can free the electrons to run around and participate in things like the flow of electricity and heat. The orbits, which are then empty, are called electron holes, and they want the electrons back.

“The hole is thought of as a positive charge, and of course, the electron has a negative charge,” Silva said. “So, hole and electron attract each other.”

The electrons and holes race around each other like dance partners pairing up to what physicists call an “exciton.” Excitons act and look a lot like particles themselves, though they’re not really particles.

Hopping biexciton light

In semiconductors, millions of excitons are correlated, or choreographed, with each other when an energy source like electricity or laser light is applied, which makes for desirable properties. Additionally, excitons can pair up to form biexcitons, boosting the semiconductor’s energetic properties.

“In this material, we found that the biexciton binding energies were high,” Silva said. “That’s why we want to put this into lasers, because up to 80 or 90 percent of the energy you input ends up as biexcitons.”

Biexcitons bump up energetically to absorb input energy. Then they contract energetically and pump out light. That would work not only in lasers but also in LEDs or other surfaces using the optoelectronic material.

“You can adjust the chemistry (of HOIPs) to control the width between biexciton states, and that controls the wavelength of the light given off,” Silva said. “And the adjustment can be very fine to give you any wavelength of light.”

That translates into any color of light the heart desires.

###

Coauthors of this paper were Stefanie Neutzner and Annamaria Petrozza from the Italian Institute of Technology (IIT); Daniele Cortecchia from IIT and Nanyang Technological University (NTU), Singapore; Cesare Soci from the Centre for Disruptive Photonic Technologies, Singapore; Teddy Salim and Yeng Ming Lam from NTU; and Vlad Dragomir and Richard Leonelli from the University of Montreal. …

Three Canadian science funding agencies plus European and Singaporean science funding agencies but not one from the US? That’s a bit unusual for research undertaken at a US educational institution.

A team at the National University of Singapore (NUS) is looking for industry partners to help take their air-conditioning technology from the laboratory to the marketplace. First, here’s more about the technology from a January 8, 2018 news item on ScienceDaily,

A team of researchers from the National University of Singapore (NUS) has pioneered a new water-based air-conditioning system that cools air to as low as 18 degrees Celsius without the use of energy-intensive compressors and environmentally harmful chemical refrigerants. This game-changing technology could potentially replace the century-old air-cooling principle that is still being used in our modern-day air-conditioners. Suitable for both indoor and outdoor use, the novel system is portable and it can also be customised for all types of weather conditions.

NUS Engineering researchers developed a novel air cooling technology that could redefine the future of air-conditioning.

Led by Associate Professor Ernest Chua from the Department of Mechanical Engineering at NUS Faculty of Engineering, the team’s novel air-conditioning system is cost-effective to produce, and it is also more eco-friendly and sustainable. The system consumes about 40 per cent less electricity than current compressor-based air-conditioners used in homes and commercial buildings. This translates into more than 40 per cent reduction in carbon emissions. In addition, it adopts a water-based cooling technology instead of using chemical refrigerants such as chlorofluorocarbon and hydrochlorofluorocarbon for cooling, thus making it safer and more environmentally-friendly.

To add another feather to its eco-friendliness cap, the novel system generates potable drinking water while it cools ambient air.

Assoc Prof Chua said, “For buildings located in the tropics, more than 40 per cent of the building’s energy consumption is attributed to air-conditioning. We expect this rate to increase dramatically, adding an extra punch to global warming. First invented by Willis Carrier in 1902, vapour compression air-conditioning is the most widely used air-conditioning technology today. This approach is very energy-intensive and environmentally harmful. In contrast, our novel membrane and water-based cooling technology is very eco-friendly – it can provide cool and dry air without using a compressor and chemical refrigerants. This is a new starting point for the next generation of air-conditioners, and our technology has immense potential to disrupt how air-conditioning has traditionally been provided.”

Innovative membrane and water-based cooling technology

Current air-conditioning systems require a large amount of energy to remove moisture and to cool the dehumidified air. By developing two systems to perform these two processes separately, the NUS Engineering team can better control each process and hence achieve greater energy efficiency.

The novel air-conditioning system first uses an innovative membrane technology – a paper-like material – to remove moisture from humid outdoor air. The dehumidified air is then cooled via a dew-point cooling system that uses water as the cooling medium instead of harmful chemical refrigerants. Unlike vapour compression air-conditioners, the novel system does not release hot air to the environment. Instead, a cool air stream that is comparatively less humid than environmental humidity is discharged – negating the effect of micro-climate. About 12 to 15 litres of potable drinking water can also be harvested after operating the air-conditioning system for a day.

“Our cooling technology can be easily tailored for all types of weather conditions, from humid climate in the tropics to arid climate in the deserts. While it can be used for indoor living and commercial spaces, it can also be easily scaled up to provide air-conditioning for clusters of buildings in an energy-efficient manner. This novel technology is also highly suitable for confined spaces such as bomb shelters or bunkers, where removing moisture from the air is critical for human comfort, as well as for sustainable operation of delicate equipment in areas such as field hospitals, armoured personnel carriers, and operation decks of navy ships as well as aircrafts,” explained Assoc Prof Chua.

The research team is currently refining the design of the air-conditioning system to further improve its user-friendliness. The NUS researchers are also working to incorporate smart features such as pre-programmed thermal settings based on human occupancy and real-time tracking of its energy efficiency. The team hopes to work with industry partners to commercialise the technology. [emphasis mine]

This project is supported by the Building and Construction Authority and National Research Foundation Singapore.

I’m sorry they didn’t include a link to a published paper, but I gather that at this time there’s more focus on commercializing the technology than on publishing papers. I wish the researchers good luck, as this cooling technology affords some exciting possibilities in a world that is heating up and growing more parched, as the NUS press release notes.

As malicious hackers find ever more sophisticated ways to launch attacks, China is about to launch the Jinan Project, the world’s first unhackable computer network, and a major milestone in the development of quantum technology.

Named after the eastern Chinese city where the technology was developed, the network is planned to be fully operational by the end of August 2017. Jinan is the hub of the Beijing-Shanghai quantum network due to its strategic location between the two principal Chinese metropolises.

“We plan to use the network for national defence, finance and other fields, and hope to spread it out as a pilot that if successful can be used across China and the whole world,” commented Zhou Fei, assistant director of the Jinan Institute of Quantum Technology, who was speaking to Britain’s Financial Times.

By launching the network, China will become the first country worldwide to implement quantum technology for a real life, commercial end. It also highlights that China is a key global player in the rush to develop technologies based on quantum principles, with the EU and the United States also vying for world leadership in the field.

The network, known as a Quantum Key Distribution (QKD) network, is more secure than widely used electronic communication equivalents. Unlike a conventional telephone or internet cable, which can be tapped without the sender or recipient being aware, a QKD network alerts both users to any tampering with the system as soon as it occurs. This is because tampering immediately alters the information being relayed, with the disturbance being instantly recognisable. Once fully implemented, it will make it almost impossible for other governments to listen in on Chinese communications.
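The tamper-evidence property has a well-known textbook basis: in a BB84-style protocol, an eavesdropper who intercepts and re-measures photons unavoidably disturbs them, which shows up as errors when the two parties compare a sample of their shared key. The toy simulation below illustrates only that general effect; it is not the Jinan network’s actual protocol or parameters.

```python
# Toy BB84-style simulation of QKD tamper detection (a sketch, not the
# Jinan network's real protocol). An eavesdropper who measures photons
# in randomly chosen bases disturbs them, so the sender and receiver
# see errors in roughly 25% of their sifted bits -- revealing the tap.

import random

def bb84_error_rate(n_photons: int, eavesdrop: bool, seed: int = 1) -> float:
    rng = random.Random(seed)
    errors, sifted = 0, 0
    for _ in range(n_photons):
        bit = rng.randint(0, 1)
        a_basis = rng.randint(0, 1)            # Alice's encoding basis
        if eavesdrop:
            e_basis = rng.randint(0, 1)        # Eve measures, then re-sends
            if e_basis == a_basis:
                bit_in_flight = bit            # lucky guess: no disturbance
            else:
                bit_in_flight = rng.randint(0, 1)  # wrong basis randomizes
            basis_in_flight = e_basis
        else:
            bit_in_flight, basis_in_flight = bit, a_basis
        b_basis = rng.randint(0, 1)            # Bob's measurement basis
        if b_basis != a_basis:
            continue                           # discarded during sifting
        if b_basis == basis_in_flight:
            measured = bit_in_flight
        else:
            measured = rng.randint(0, 1)       # basis mismatch: random result
        sifted += 1
        errors += (measured != bit)
    return errors / sifted

quiet = bb84_error_rate(20000, eavesdrop=False)
tapped = bb84_error_rate(20000, eavesdrop=True)
```

Without an eavesdropper, the sifted bits agree exactly; with one, roughly a quarter of them disagree, and that elevated error rate is the alarm the article describes.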

In the Jinan network, some 200 users from China’s military, government, finance and electricity sectors will be able to send messages safe in the knowledge that only they are reading them. It will be the world’s longest land-based quantum communications network, stretching over 2,000 km.

Also speaking to the ‘Financial Times’, quantum physicist Tim Byrnes, based at New York University’s (NYU) Shanghai campus commented: ‘China has achieved staggering things with quantum research… It’s amazing how quickly China has gotten on with quantum research projects that would be too expensive to do elsewhere… quantum communication has been taken up by the commercial sector much more in China compared to other countries, which means it is likely to pull ahead of Europe and US in the field of quantum communication.’

However, Europe is also determined to be at the forefront of the ‘quantum revolution’, which promises to be one of the major defining technological phenomena of the twenty-first century. The EU has invested EUR 550 million in quantum technologies and has provided policy support to researchers through the 2016 Quantum Manifesto.

Moreover, with China’s latest achievement (and a previous one already notched up from July 2017 when its quantum satellite – the world’s first – sent a message to Earth on a quantum communication channel), it looks like the race to be crowned the world’s foremost quantum power is well and truly underway…

Quantum entanglement—physics at its strangest—has moved out of this world and into space. In a study that shows China’s growing mastery of both the quantum world and space science, a team of physicists reports that it sent eerily intertwined quantum particles from a satellite to ground stations separated by 1200 kilometers, smashing the previous world record. The result is a stepping stone to ultrasecure communication networks and, eventually, a space-based quantum internet.

“It’s a huge, major achievement,” says Thomas Jennewein, a physicist at the University of Waterloo in Canada. “They started with this bold idea and managed to do it.”

Entanglement involves putting objects in the peculiar limbo of quantum superposition, in which an object’s quantum properties occupy multiple states at once: like Schrödinger’s cat, dead and alive at the same time. Then those quantum states are shared among multiple objects. Physicists have entangled particles such as electrons and photons, as well as larger objects such as superconducting electric circuits.

Theoretically, even if entangled objects are separated, their precarious quantum states should remain linked until one of them is measured or disturbed. That measurement instantly determines the state of the other object, no matter how far away. The idea is so counterintuitive that Albert Einstein mocked it as “spooky action at a distance.”

Starting in the 1970s, however, physicists began testing the effect over increasing distances. In 2015, the most sophisticated of these tests, which involved measuring entangled electrons 1.3 kilometers apart, showed once again that spooky action is real.

Beyond the fundamental result, such experiments also point to the possibility of hack-proof communications. Long strings of entangled photons, shared between distant locations, can be “quantum keys” that secure communications. Anyone trying to eavesdrop on a quantum-encrypted message would disrupt the shared key, alerting everyone to a compromised channel.

But entangled photons degrade rapidly as they pass through the air or optical fibers. So far, the farthest anyone has sent a quantum key is a few hundred kilometers. “Quantum repeaters” that rebroadcast quantum information could extend a network’s reach, but they aren’t yet mature. Many physicists have dreamed instead of using satellites to send quantum information through the near-vacuum of space. “Once you have satellites distributing your quantum signals throughout the globe, you’ve done it,” says Verónica Fernández Mármol, a physicist at the Spanish National Research Council in Madrid. …

Popkin goes on to detail the process for making the discovery in easily accessible (for the most part) writing and in a video and a graphic.

Russell Brandom writing for The Verge in a June 15, 2017 article about the Chinese quantum satellite adds detail about previous work and teams in other countries also working on the challenge (Note: Links have been removed),

Quantum networking has already shown promise in terrestrial fiber networks, where specialized routing equipment can perform the same trick over conventional fiber-optic cable. The first such network was a DARPA-funded connection established in 2003 between Harvard, Boston University, and a private lab. In the years since, a number of companies have tried to build more ambitious connections. The Swiss company ID Quantique has mapped out a quantum network that would connect many of North America’s largest data centers; in China, a separate team is working on a 2,000-kilometer quantum link between Beijing and Shanghai, which would rely on fiber to span an even greater distance than the satellite link. Still, the nature of fiber places strict limits on how far a single photon can travel.

According to ID Quantique, a reliable satellite link could connect the existing fiber networks into a single globe-spanning quantum network. “This proves the feasibility of quantum communications from space,” ID Quantique CEO Gregoire Ribordy tells The Verge. “The vision is that you have regional quantum key distribution networks over fiber, which can connect to each other through the satellite link.”

China isn’t the only country working on bringing quantum networks to space. A collaboration between the UK’s University of Strathclyde and the National University of Singapore is hoping to produce the same entanglement in cheap, ready-made satellites called CubeSats. A Canadian team is also developing a method of producing entangled photons on the ground before sending them into space.

I wonder if there’s going to be an invitational event for scientists around the world to celebrate the launch.

Sometimes it seems as if scientific research is a race with everyone competing for first place. As in most sports, there are multiple competitions for various sub-groups but only one race that matters. The US has held the lead position for decades, although always with some anxiety. These days the anxiety is focused on China. A June 15, 2017 news item on ScienceDaily suggests that US dominance is threatened in at least one area of research: the biomedical sector,

American scientific teams still publish significantly more biomedical research discoveries than teams from any other country, a new study shows, and the U.S. still leads the world in research and development expenditures.

But American dominance is slowly shrinking, the analysis finds, as China’s skyrocketing investment in science over the last two decades begins to pay off. Chinese biomedical research teams now rank fourth in the world for total number of new discoveries published in six top-tier journals, and the country spent three-quarters of what the U.S. spent on research and development during 2015.

Meanwhile, the analysis shows, scientists from the U.S. and other countries increasingly make discoveries and advancements as part of teams that involve researchers from around the world.

The last 15 years have ushered in an era of “team science” as research funding in the U.S., Great Britain and other European countries, as well as Canada and Australia, stagnated. The number of authors per paper has also grown over time. For example, in 2000 only two percent of the research papers the new study examined included 21 or more authors; by 2015 that share had risen to 12.5 percent.

The new findings, published in JCI Insight by a team of University of Michigan researchers, come at a critical time for the debate over the future of U.S. federal research funding. The study is based on a careful analysis of original research papers published in six top-tier and four mid-tier journals from 2000 to 2015, in addition to data on R&D investment from those same years.

The study builds on other work that has also warned of America’s slipping status in the world of science and medical research, and the resulting impact on the next generation of aspiring scientists.

“It’s time for U.S. policy-makers to reflect and decide whether the year-to-year uncertainty in National Institutes of Health budget and the proposed cuts are in our societal and national best interest,” says Bishr Omary, M.D., Ph.D., senior author of the new data-supported opinion piece and chief scientific officer of Michigan Medicine, U-M’s academic medical center. “If we continue on the path we’re on, it will be harder to maintain our lead and, even more importantly, we could be disenchanting the next generation of bright and passionate biomedical scientists who see a limited future in pursuing a scientist or physician-investigator career.”

The analysis charts South Korea’s entry into the top 10 countries for publications, as well as China’s leap from outside the top 10 in 2000 to fourth place in 2015. It also tracks the major increases in support for research in South Korea and Singapore since the start of the 21st century.

Meticulous tracking

First author of the study, U-M informationist Marisa Conte, and Omary co-led a team that looked carefully at the currency of modern science: peer-reviewed basic science and clinical research papers describing new findings, published in journals with long histories of publishing the world’s most significant discoveries.

They reviewed every issue of six top-tier international journals (JAMA, Lancet, the New England Journal of Medicine, Cell, Nature and Science), and four mid-ranking journals (British Medical Journal, JAMA Internal Medicine, Journal of Cell Science, FASEB Journal), chosen to represent the clinical and basic science aspects of research.

The analysis included only papers that reported new results from basic research experiments, translational studies, clinical trials, meta-analyses, and studies of disease outcomes. Author affiliations for corresponding authors and all other authors were recorded by country.

The rise in global cooperation is striking. In 2000, 25 percent of papers in the six top-tier journals were by teams that included researchers from at least two countries. In 2015, that figure was closer to 50 percent. The increasing need for multidisciplinary approaches to make major advances, coupled with advances in Internet-based collaboration tools, likely has something to do with this, Omary says.
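The kind of tally behind figures like these can be sketched in a few lines. The records below are hypothetical stand-ins for the paper metadata the study collected; only the counting logic is the point:

```python
# Hypothetical records: each paper's year and author-affiliation countries.
papers = [
    {"year": 2000, "countries": {"US"}},
    {"year": 2000, "countries": {"US", "CA"}},
    {"year": 2000, "countries": {"UK"}},
    {"year": 2000, "countries": {"US"}},
    {"year": 2015, "countries": {"US", "CN"}},
    {"year": 2015, "countries": {"UK", "DE", "AU"}},
    {"year": 2015, "countries": {"CN"}},
    {"year": 2015, "countries": {"US", "SG"}},
]

def multinational_share(papers, year):
    """Fraction of a year's papers whose authors span two or more countries."""
    pool = [p for p in papers if p["year"] == year]
    multi = sum(1 for p in pool if len(p["countries"]) >= 2)
    return multi / len(pool)

print(multinational_share(papers, 2000))  # 0.25
print(multinational_share(papers, 2015))  # 0.75
```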

The authors, who also include Santiago Schnell, Ph.D., and Jing Liu, Ph.D., note that part of their group’s interest in doing the study sprang from their hypothesis that a flat NIH budget is likely to have negative consequences, and they wanted to gather data to test that hypothesis.

They also observed what appears to be an increasing number of Chinese-born scientists who trained in the U.S. returning to China after their training, where once most of them would have sought to stay in the U.S. In addition, Singapore has been able to recruit several top-notch U.S. and other international scientists thanks to its marked increase in R&D investment.

The same trends appear to be happening in Great Britain, Australia, Canada, France, Germany and other countries the authors studied, where research investment has stayed flat when measured as a percentage of the U.S. total over the last 15 years.

The authors note that their study is based on data up to 2015 and that in the current 2017 federal fiscal year, funding for the NIH has increased thanks to bipartisan Congressional appropriations. The NIH provides most of the federal support for medical and basic biomedical research in the U.S. But cuts to the research funding that supports many federal agencies are under discussion in the current debates over the 2018 budget. Meanwhile, Chinese R&D spending is projected to surpass the U.S. total by 2022.

“Our analysis, albeit limited to a small number of representative journals, supports the importance of financial investment in research,” Omary says. “I would still strongly encourage any child interested in science to pursue their dream and passion, but I hope that our current and future investment in NIH and other federal research support agencies will rise above any branch of government to help our next generation reach their potential and dreams.”

The notion of a race and looking back to see who, if anyone, is gaining on you reminded me of a local piece of sports lore, the Roger Bannister-John Landy ‘Miracle Mile’. In the run-up to the 1954 Commonwealth Games held in Vancouver, Canada, two runners were known to have broken the 4-minute mile (previously thought to be impossible), and their first meeting was considered historic. Here’s more from the miraclemile1954.com website,

On August 7, 1954 during the British Empire and Commonwealth Games in Vancouver, B.C., England’s Roger Bannister and Australian John Landy met for the first time in the one mile run at the newly constructed Empire Stadium.

Both men had broken the four minute barrier previously that year. Bannister was the first to break the mark with a time of 3:59.4 on May 6th in Oxford, England. Subsequently, on June 21st in Turku, Finland, John Landy became the new record holder with an official time of 3:58.

The world watched eagerly as both men approached the starting blocks. As 35,000 enthusiastic fans looked on, no one knew what would take place on that historic day.

Promoted as “The Mile of the Century”, it would later be known as the “Miracle Mile”.

With only 90 yards to go in one of the world’s most memorable races, John Landy glanced over his left shoulder to check his opponent’s position. At that instant Bannister streaked by him to victory in a Commonwealth record time of 3:58.8. Landy’s second place finish in 3:59.6 marked the first time the four minute mile had been broken by two men in the same race.

The website hosts an image of the moment memorialized in bronze when Landy looks to his left as Bannister passes him on his right,

According to an April 12, 2017 news item on ScienceDaily, shapeshifting in response to environmental stimuli is the fourth dimension (I have a link to a posting about 4D printing with another fourth dimension),

A team of researchers from Georgia Institute of Technology and two other institutions has developed a new 3-D printing method to create objects that can permanently transform into a range of different shapes in response to heat.

The team, which included researchers from the Singapore University of Technology and Design (SUTD) and Xi’an Jiaotong University in China, created the objects by printing layers of shape memory polymers with each layer designed to respond differently when exposed to heat.

“This new approach significantly simplifies and increases the potential of 4-D printing by incorporating the mechanical programming post-processing step directly into the 3-D printing process,” said Jerry Qi, a professor in the George W. Woodruff School of Mechanical Engineering at Georgia Tech. “This allows high-resolution 3-D printed components to be designed by computer simulation, 3-D printed, and then directly and rapidly transformed into new permanent configurations by simply heating.”

The research was reported April 12 [2017] in the journal Science Advances, a publication of the American Association for the Advancement of Science. The work is funded by the U.S. Air Force Office of Scientific Research, the U.S. National Science Foundation and the Singapore National Research Foundation through the SUTD DManD Centre.

4D printing is an emerging technology that allows a 3D-printed component to transform its structure when exposed to heat, light, humidity, or other environmental stimuli. This technology extends the shape creation process beyond 3D printing, resulting in additional design flexibility that can lead to new types of products which can adjust their functionality in response to the environment, in a pre-programmed manner. However, 4D printing generally involves complex and time-consuming post-processing steps to mechanically programme the component. Furthermore, the materials are often limited to soft polymers, which limits their applicability in structural scenarios.

A group of researchers from the SUTD, Georgia Institute of Technology, Xi’an Jiaotong University and Zhejiang University has introduced an approach that significantly simplifies and increases the potential of 4D printing by incorporating the mechanical programming post-processing step directly into the 3D printing process. This allows high-resolution 3D-printed components to be designed by computer simulation, 3D printed, and then directly and rapidly transformed into new permanent configurations by using heat. This approach can help save printing time and materials used by up to 90%, while completely eliminating the time-consuming mechanical programming process from the design and manufacturing workflow.

“Our approach involves printing composite materials where at room temperature one material is soft but can be programmed to contain internal stress, and the other material is stiff,” said Dr. Zhen Ding of SUTD. “We use computational simulations to design composite components where the stiff material has a shape and size that prevents the release of the programmed internal stress from the soft material after 3D printing. Upon heating, the stiff material softens and allows the soft material to release its stress. This results in a change – often dramatic – in the product shape.” This new shape is fixed when the product is cooled, with good mechanical stiffness. The research demonstrated many interesting shape changing parts, including a lattice that can expand by almost 8 times when heated.
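The locking-and-release mechanism Dr. Ding describes can be illustrated with a toy one-dimensional force balance (my own sketch, not the team's computational model; all moduli and strains are made up): two bars deform together in parallel, the soft bar carries a programmed pre-strain, and the stiff bar's modulus collapses on heating.

```python
def composite_strain(E_soft, A_soft, eps0, E_stiff, A_stiff):
    """Equilibrium strain of two bars constrained to deform together, where
    the soft bar carries a programmed pre-strain eps0. Linear elasticity:
    E_soft*A_soft*(eps - eps0) + E_stiff*A_stiff*eps = 0."""
    k_s, k_h = E_soft * A_soft, E_stiff * A_stiff
    return k_s * eps0 / (k_s + k_h)

eps0 = 0.10  # 10% programmed contraction stored in the soft phase

# Room temperature: the stiff phase (glassy, ~1 GPa) locks the printed shape.
cold = composite_strain(E_soft=1e6, A_soft=1.0, eps0=eps0, E_stiff=1e9, A_stiff=1.0)
# Heated: the stiff phase softens (rubbery, ~1 MPa) and the stress is released.
hot = composite_strain(E_soft=1e6, A_soft=1.0, eps0=eps0, E_stiff=1e6, A_stiff=1.0)

print(cold)  # ≈ 0.0001: essentially no shape change while cold
print(hot)   # 0.05: a large fraction of the programmed strain is recovered
```

The three-orders-of-magnitude drop in modulus across the glass transition is what converts a negligible strain into a dramatic shape change, mirroring the composite behaviour described above.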

This new shape becomes permanent and the composite material will not return to its original 3D-printed shape, upon further heating or cooling. “This is because of the shape memory effect,” said Prof. H. Jerry Qi of Georgia Tech. “In the two-material composite design, the stiff material exhibits shape memory, which helps lock the transformed shape into a permanent one. Additionally, the printed structure also exhibits the shape memory effect, i.e. it can then be programmed into further arbitrary shapes that can always be recovered to its new permanent shape, but not its 3D-printed shape.”

Said SUTD’s Prof. Martin Dunn, “The key advance of this work is a 4D printing method that is dramatically simplified and allows the creation of high-resolution, complex 3D reprogrammable products; it promises to enable myriad applications across biomedical devices, 3D electronics, and consumer products. It even opens the door to a new paradigm in product design, where components are designed from the outset to inhabit multiple configurations during service.”

Here’s a video,

Uploaded on Apr 17, 2017

A research team led by the Singapore University of Technology and Design’s (SUTD) Associate Provost of Research, Professor Martin Dunn, has come up with a new and simplified 4D printing method that uses a 3D printer to rapidly create 3D objects, which can permanently transform into a range of different shapes in response to heat.

A team of scientists led by Associate Professor Yang Hyunsoo from the Department of Electrical and Computer Engineering at the National University of Singapore’s (NUS) Faculty of Engineering has invented a novel ultra-thin multilayer film which could harness the properties of tiny magnetic whirls, known as skyrmions, as information carriers for storing and processing data on magnetic media.

The nano-sized thin film, which was developed in collaboration with researchers from Brookhaven National Laboratory, Stony Brook University, and Louisiana State University, is a critical step towards the design of data storage devices that use less power and work faster than existing memory technologies. The invention was reported in the prestigious scientific journal Nature Communications on 10 March 2017.

The digital transformation has resulted in ever-increasing demands for better processing and storing of large amounts of data, as well as improvements in hard drive technology. Since their discovery in magnetic materials in 2009, skyrmions, which are tiny swirling magnetic textures only a few nanometres in size, have been extensively studied as possible information carriers in next-generation data storage and logic devices.

Skyrmions have been shown to exist in layered systems, with a heavy metal placed beneath a ferromagnetic material. Due to the interaction between the different materials, an interfacial symmetry breaking interaction, known as the Dzyaloshinskii-Moriya interaction (DMI), is formed, and this helps to stabilise a skyrmion. However, without an out-of-plane magnetic field present, the stability of the skyrmion is compromised. In addition, due to its tiny size, it is difficult to image the nano-sized materials.
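Why the DMI stabilises twisted spin textures can be seen in a minimal one-dimensional spin-chain model (an illustration of the general physics, not the paper's multilayer calculation): exchange alone favours aligned spins, but adding a DMI term shifts the energy minimum to a chiral twist of angle atan(D/J) between neighbours.

```python
import math

def bond_energy(theta, J, D):
    # Energy per bond of a spin chain with a uniform twist angle theta:
    # the exchange term (-J cos) favours alignment, while the DMI term
    # (-D sin) favours a twist of one fixed handedness.
    return -J * math.cos(theta) - D * math.sin(theta)

def optimal_twist(J, D, steps=200000):
    # Brute-force search for the energy-minimising twist angle in [-pi, pi).
    thetas = (i * 2 * math.pi / steps - math.pi for i in range(steps))
    return min(thetas, key=lambda t: bond_energy(t, J, D))

print(optimal_twist(J=1.0, D=0.0))  # ≈ 0.0: collinear ferromagnet
print(optimal_twist(J=1.0, D=1.0))  # ≈ 0.785 = atan(D/J): chiral spiral
```

A skyrmion is, loosely, a two-dimensional wrapped-up version of such a spiral, which is why a sufficiently large DMI is the key ingredient for stabilising it.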

To address these limitations, the researchers worked towards creating stable magnetic skyrmions at room temperature without the need for a biasing magnetic field.

Unique material for data storage

The NUS team, which also comprises Dr Shawn Pollard and Ms Yu Jiawei from the NUS Department of Electrical and Computer Engineering, found that a large DMI could be maintained in multilayer films composed of cobalt and palladium, and this is large enough to stabilise skyrmion spin textures.

In order to image the magnetic structure of these films, the NUS researchers, in collaboration with Brookhaven National Laboratory in the United States, employed Lorentz transmission electron microscopy (L-TEM). L-TEM can image magnetic structures below 10 nanometres, but it had not previously been used to observe skyrmions in multilayer geometries because such samples were predicted to exhibit zero signal. When conducting the experiments, however, the researchers found that by tilting the films with respect to the electron beam, they could obtain clear contrast consistent with that expected for skyrmions, with sizes below 100 nanometres.

Dr Pollard explained, “It has long been assumed that there is no DMI in a symmetric structure like the one present in our work, hence, there will be no skyrmion. It is really unexpected for us to find both large DMI and skyrmions in the multilayer film we engineered. What’s more, these nanoscale skyrmions persisted even after the removal of an external biasing magnetic field, which are the first of their kind.”

Assoc Prof Yang added, “This experiment not only demonstrates the usefulness of L-TEM in studying these systems, but also opens up a completely new material in which skyrmions can be created. Without the need for a biasing field, the design and implementation of skyrmion based devices are significantly simplified. The small size of the skyrmions, combined with the incredible stability generated here, could be potentially useful for the design of next-generation spintronic devices that are energy efficient and can outperform current memory technologies.”

Next step

Assoc Prof Yang and his team are currently looking at how nanoscale skyrmions interact with each other and with electrical currents, to further the development of skyrmion based electronics.

The announcement that a significant portion of the OECD’s (Organization for Economic Cooperation and Development) dossiers on 11 nanomaterials has next to no value for assessing risk seems a harsh judgment from the Center for International Environmental Law (CIEL). From a March 1, 2017 posting by Lynn L. Bergeson on Nanotechnology Now,

On February 23, 2017, the Center for International Environmental Law (CIEL) issued a press release announcing a new report, commissioned by CIEL, the European Environmental Citizens’ Organization for Standardization (ECOS), and the Oeko-Institute, that “shows that most of the information made available by the Sponsorship Testing Programme of the Organisation for Economic Co-operation and Development (OECD) is of little to no value for the regulatory risk assessment of nanomaterials.”

The study published today [Feb. 23, 2017] was delivered by the Institute of Occupational Medicine (IOM) based in Singapore. IOM screened the 11,500 pages of raw data of the OECD dossiers on 11 nanomaterials, and analysed all characterisation and toxicity data on three specific nanomaterials – fullerenes, single-walled carbon nanotubes, and zinc oxide.

“EU policy makers and industry are using the existence of the data to dispel concerns about the potential health and environmental risks of manufactured nanomaterials,” said David Azoulay, Senior Attorney for CIEL. “When you analyse the data, in most cases, it is impossible to assess what material was actually tested. The fact that data exists about a nanomaterial does not mean that the information is reliable to assess the hazards or risks of the material.”

The dossiers were published in 2015 by the OECD’s Working Party on Manufactured Nanomaterials (WPMN), which has yet to draw conclusions on the data quality. Despite this missing analysis, some stakeholders participating in EU policy-making – notably the European Chemicals Agency (ECHA) and the European Commission’s Joint Research Centre – have presented the dossiers as containing information on nano-specific human health and environmental impacts. Industry federations and individual companies have taken this a step further emphasizing that there is enough information available to discard most concerns about potential health or environmental risks of manufactured nanomaterials.

“Our study shows these claims that there is sufficient data available on nanomaterials are not only false, but dangerously so,” said Doreen Fedrigo, Senior Policy Officer of ECOS. ”The lack of nano-specific information in the dossiers means that the results of the tests cannot be used as evidence of no ‘nano-effect’ of the tested material. This information is crucial for regulators and producers who need to know the hazard profile of these materials. Analysing the dossiers has shown that legislation detailing nano-specific information requirements is crucial for the regulatory risk assessment of nanomaterials.”

The report provides important recommendations on future steps in the governance of nanomaterials. “Based on our analysis, serious gaps in current dossiers must be filled in with characterisation information, preparation protocols, and exposure data,” said Andreas Hermann of the Oeko-Institute. “Using these dossiers as they are and ignoring these recommendations would mean making decisions on the safety of nanomaterials based on faulty and incomplete data. Our health and environment requires more from producers and regulators.”

The Sponsorship Testing Programme of the Working Party on Manufactured Nanomaterials (WPMN) of the Organisation for Economic Co-operation and Development (OECD) started in 2007 with the aim of testing a selection of 13 representative nanomaterials for many endpoints. The main objectives of the programme were to better understand what information on intrinsic properties of the nanomaterials might be relevant for exposure and hazard assessment, and to assess the validity of OECD chemicals Test Guidelines for nanomaterials. The testing programme concluded in 2015 with the publication of dossiers on 11 nanomaterials: 11,500 pages of raw data to be analysed and interpreted.

The WPMN has not drawn conclusions on the data quality, but some stakeholders participating in EU policy-making – notably the European Chemicals Agency and the European Commission’s Joint Research Centre – presented the dossiers as containing much scientific information that provided a better understanding of their nano-specific human health and environmental impacts. Industry federations and individual companies echoed the views, highlighting that there was enough information available to discard most concerns about potential health or environmental risks of manufactured nanomaterials.

The Center for International Environmental Law (CIEL), the European Citizens’ Organisation for Standardisation (ECOS) and the Öko-Institut commissioned scientific analysis of these dossiers to assess the relevance of the data for regulatory risk assessment.

Most studies and documents in the dossiers contain insufficient characterisation data about the specific nanomaterial addressed (size, particle distribution, surface shape, etc.), making it impossible to assess what material was actually tested.

This makes it impossible to make any firm statements regarding the nano-specificity of the hazard data published, or the relationship between observed effects and specific nano-scale properties.

Less than 2% of the study records provide detail on the size of the nanomaterial tested. Most studies use mass rather than number or size distribution (so not following scientifically recommended reporting practice).

The absence of details on the method used to prepare the nanomaterial makes it virtually impossible to correlate an identified hazard with specific nanomaterial characteristic. Since the studies do not indicate dispersion protocols used, it is impossible to assess whether the final dispersion contained the intended mass concentration (or even the actual presence of nanomaterials in the test system), how much agglomeration may have occurred, and how the preparation protocols may have influenced the size distribution.

There is not enough nano-specific information in the dossiers to inform about nano-characteristics of the raw material that influence their toxicology. This information is important for regulators and its absence makes information in the dossier irrelevant to develop read-across guidelines.

Only about half of the endpoint study records using OECD Test Guidelines (TGs) were delivered using unaltered OECD TGs, thereby respecting the Guidelines’ requirements. The reasons for modifications of the TGs used in the tests are not clear from the documentation. This includes whether the study record was modified to account for challenges related to specific nanomaterial properties or for other, non-nano-specific reasons.

The studies do not contain systematic testing of the influence of nano-specific characteristics on the study outcome, and they do not provide the data needed to assess the effect of nano-scale features on the Test Guidelines. Given the absence of fundamental information on nanomaterial characteristics, the dossiers do not provide evidence of the applicability of existing OECD Test Guidelines to nanomaterials.
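The screening IOM performed amounts to checking each study record for the presence of required characterisation fields. A toy version (record contents and field names are entirely hypothetical; the real dossiers are free-text documents, not tidy records) might look like:

```python
# Characterisation fields a record must supply to be usable for risk assessment.
REQUIRED = {"particle_size", "size_distribution", "surface_chemistry",
            "dispersion_protocol"}

# Hypothetical study records mimicking the kind of metadata screened for.
records = [
    {"material": "ZnO", "particle_size": "30 nm", "dispersion_protocol": None},
    {"material": "SWCNT", "surface_chemistry": "COOH"},
    {"material": "fullerene", "particle_size": "1 nm",
     "size_distribution": "monodisperse", "surface_chemistry": "none",
     "dispersion_protocol": "sonication 10 min"},
]

def usable_for_risk_assessment(record):
    """A record is screened out if any required field is missing or empty."""
    return all(record.get(field) for field in REQUIRED)

usable = [r["material"] for r in records if usable_for_risk_assessment(r)]
print(usable)  # ['fullerene']
```

On this logic, a finding like "less than 2% of study records provide detail on size" is simply the pass rate of one such field check across the 11,500 pages of dossiers.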

The analysis therefore dispels several myths created by some stakeholders following publication of the dossiers and provides important perspective for the governance of nanomaterials. In particular, the analysis makes recommendations to:

– Systematically assess the validity of existing Test Guidelines for relevance to nanomaterials

– Develop Test Guidelines for dispersion and other test preparations

– Define the minimum characteristics of nanomaterials that need to be reported

– Support the build-up of an exposure database

– Fill the gaps in current dossiers with characterisation information, preparation protocols and exposure data

This is not my area of expertise and while I find the language a bit inflammatory, it’s my understanding that there are great gaps in our understanding of nanomaterials and testing for risk assessment has been criticized for many of the reasons pointed out by CIEL, ECOS, and the Oeko-Institute.

Fullerex is a leading independent broker of nanomaterials and nano-intermediates. Our mission is to support the advancement of nanotechnology in creating radical, transformative and sustainable improvement to society. We are dedicated to achieving these aims by accelerating the commercialisation and usage of nanomaterials across industry and beyond. Fullerex is active in market development and physical trading of advanced materials. We generate demand for nanomaterials across synergistic markets by stimulating innovation with end-users and ensuring robust supply chains are in place to address the growing commercial trade interest. Our end-user markets include Polymers and Polymer Composites, Coatings, Tyre and Rubber, Cementitious Composites, 3D Printing and Printed Electronics, the Energy sector, Lubricating Oils and Functional Fluids.

The materials we cover:

– Nanomaterials: Includes fullerenes, carbon nanotubes and graphene, metal and metal oxide nanoparticles, and organic-inorganic hybrids. Supplied as raw nanopowders or ready-to-use dispersions and concentrates.

– Nano-intermediates: Producer goods and semi-finished products such as nano-enabled coatings, polymer masterbatches, conductive inks, thermal interface materials and catalysts.

As for Tom Eldridge, here’s more about him, his brother, and the company from the Fullerex About page,

Fullerex was founded by Joe and Tom Eldridge, brothers with a keen interest in nanotechnology and the associated emerging market for nanomaterials.

Joe has a strong background in trading, with nearly 10 years’ experience as a stockbroker managing client accounts for European Equities and FX. At university he read Mathematics at Imperial College London, gaining a BSc degree, and he has closely followed the markets for disruptive technologies and advanced materials for a number of years.

Tom worked in the City of London for 7 years in commercial roles, with expertise in market data and financial and regulatory news. In his academic background, he earned a BSc degree in Physics and Philosophy at Kings College London and is a member of the Institute of Physics.

As a result, Fullerex has the strong management composition that allows the company to support the growth of the nascent and highly promising nanomaterials industry. Fullerex is a flexible company with drive, enthusiasm and experience, committed to aiding the development of this market.

Getting back to the matter at hand, that’s a rather provocative title for Tom Eldridge’s essay, given that he’s a Brit and (I believe) the Brits view themselves as leaders in the ‘graphene race’, but he offers a more nuanced analysis than the title might suggest. First, the patent landscape (from Eldridge’s Jan. 5, 2017 essay),

As competition to exploit the “wonder material” has intensified around the world, detailed reports have so far been published which set out an in-depth depiction of the global patent landscape for graphene, notably from CambridgeIP and the UK Intellectual Property Office, in 2013 and 2015 respectively. Ostensibly the number of patents and patent applications both indicated that China was leading the innovation in graphene technology. However, on closer inspection it became less clear as to how closely the patent figures themselves reflect actual progress and whether this will translate into real economic impact. Some of the main reasons to be doubtful included:

– 98% of the Chinese patent applications only cover China, so therefore have no worldwide monopoly.
– A large number of the Chinese patents are filed in December, possibly due to demand to meet patent quotas. The implication being that the patent filings follow a politically driven agenda, rather than a purely innovation or commercially driven agenda.
– In general, inventors could be more likely to file for patent protection in some countries rather than others e.g. for tax purposes. Which therefore does not give a truly accurate picture of where all the actual research activity is based.
– Measuring the proportion of graphene related patents to overall patents is more indicative of graphene specialisation, which shows that Singapore has the largest proportion of graphene patents, followed by China, then South Korea.

Following the recent launch of the latest edition of the Bulk Graphene Pricing Report, which is available exclusively through The Graphene Council, Fullerex has updated its comprehensive list of graphene producers worldwide, and below is a summary of the number of graphene producers by country in 2017.

Summary Table Showing the Number of Graphene Producers by Country and Region

The total number of graphene producers identified is 142, across 27 countries. This research expands upon previous surveys of the graphene industry, such as the big data analysis performed by Nesta in 2015 (Shapira, 2015). [Nesta, formerly NESTA (National Endowment for Science, Technology and the Arts), is an independent charity that works to increase the innovation capacity of the UK; see Wikipedia for more.] The Nesta study revealed 65 producers throughout 16 countries but was unable to glean accurate data on producers in Asia, particularly China.

As we can now see however from the data collected by Fullerex, China has the largest number of graphene producers, followed by the USA, and then the UK.

In addition to having more companies active in the production and sale of graphene than any other country, China also holds about 2/3rds of the global production capacity, according to Fullerex.

Eldridge goes on to note that the ‘graphene industry’ won’t truly grow and develop until there are substantive applications for the material. He also suggests taking another look at the production figures,

As with the patent landscape, rather than looking at the absolute figures, we can review the numbers in relative terms. For instance, if we normalise to account for the differences in the size of each country, by looking at the number of producers as a proportion of GDP, we see the following: Spain (7.18), UK (4.48), India (3.73), China (3.57), Canada (3.28) [emphasis mine], USA (1.79) (United Nations, 2013).
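The normalisation Eldridge describes is simple division: producers per unit of GDP rather than an absolute count. A minimal sketch of that calculation, using made-up country names and figures (not the actual Fullerex or United Nations data), shows how the ranking can invert:

```python
# Hypothetical illustration of normalising producer counts by GDP.
# All numbers below are invented for the example, not real data.
producers = {"CountryA": 20, "CountryB": 10}       # number of graphene producers
gdp_trillions = {"CountryA": 10.0, "CountryB": 2.0}  # GDP in trillions of dollars

# Producers per trillion dollars of GDP
ratio = {c: producers[c] / gdp_trillions[c] for c in producers}

# Rank countries by the normalised figure, highest first
ranked = sorted(ratio, key=ratio.get, reverse=True)
# CountryB leads per unit of GDP (5.0 vs 2.0) despite having fewer producers.
```

The point of the sketch is only that a smaller economy with modest absolute numbers (here CountryB, or Spain and Canada in Eldridge's figures) can rank ahead of a much larger one once size is accounted for.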

Unsurprisingly, each leading country has a national strategy for economic development which involves graphene prominently.

For instance, the Spanish Council for Scientific Research has 9 of its institutes, along with 10 universities and other public R&D labs, involved in coordinating graphene projects with industry.

The Natural Sciences and Engineering Research Council of Canada [NSERC] has placed graphene as one of five research topics in its target area of “Advanced Manufacturing” for Strategic Partnership Grants.

The UK government highlights advanced materials as one of its Eight Great Technologies, of which graphene is a major part, having received investment for the NGI and GEIC buildings, along with EPSRC and Innovate UK projects. I wrote previously about the UK punching above its weight in terms of research ( http://fullerex.com/index.php/articles/130-the-uk-needs-an-industrial-revolution-can-graphene-deliver/ ), but that R&D spending relative to GDP was too low compared to other developed nations. It is good to see that investment into graphene production in the UK is bucking that trend, and we should anticipate this will provide a positive economic outcome.

Yes, I’m particularly interested in the fact that Canada becomes more important as a producer when the numbers are relative, but it is also worth comparing the chart with Eldridge’s text and noting how importance shifts depending on which numbers are being considered.

I recommend reading Eldridge’s piece in its entirety.

A few notes about graphene in Canada

By the way, the information in Eldridge’s essay about NSERC’s placement of graphene as a target area for grants is news to me. (As I have often noted here, I get more information about the Canadian nano scene from international sources than I do from our national sources.)

Happily I do get some home news such as a Jan. 5, 2017 email update from Lomiko Metals, a Canadian junior exploration company focused on graphite and lithium. The email provides the latest information from the company (as I’m not an expert in business or mining this is not an endorsement),

On December 13, 2016 we were excited to announce the completion of our drill program at the La Loutre flake graphite property. We received very positive results from our 1550 meter drilling program in 2015 in the area we are drilling now. In that release I stated, “The intercepts of multiple zones of mineralization in the Refractory Zone, where we have reported high grade intercepts previously, is a very promising sign. The samples have been rushed to the ALS Laboratory for full assay testing.” We hope to have the results of those assays shortly.

December 16, 2016 Lomiko announced a 10:1 roll back of our shares. We believe that this roll back is important as we work towards securing long term equity financing for the company. Lomiko began trading on the basis of the roll back on December 19.

We believe that Graphite has a bright future because of the many new products that will rely on the material. I have attached a link to a video on Lomiko, Graphite and Graphene.

January 3, 2017 Lomiko announced the extension and modification of its option agreements with Canadian Strategic Metals Inc. for the La Loutre and Lac des Iles properties. The effect of this extension is to give Lomiko additional time to complete the required work under the agreements.

Going forward Lomiko is in a much stronger position as the result of our share roll back. Potential equity funders who are very interested in our forthcoming assay results from La Loutre and the overall prospects of the company, have been reassured by our share consolidation.

Looking forward to 2017, we anticipate the assays of the La Loutre drilling to be delivered in the next 90 days, sooner we hope. We also anticipate additional equity funding will become available for the further exploration and delineation of the La Loutre and Lac des Iles properties and deposits.

More generally, we are confident that the market for large flake graphite will become firmer in 2017. Lomiko’s strategy of identifying near-surface, ready-to-mine graphite nodes puts us in the position to take advantage of improvements in the graphite price without having to commit large sums to massive mine development. As we identify and analyze the graphite nodes we are finding, we increase the potential resources of the company. 2017 should see significantly improved resource estimates for Lomiko’s properties.

As I wasn’t familiar with the term ‘roll back of shares’, I looked it up and found this in an April 18, 2012 posting by Dudley Pierce Baker on kitco.com,

As a general rule, we hate to see an announcement of a share rollback; however, there are exceptions, which we cover below. Investors should always be aware that if a company has, say, over 150 million shares outstanding, in our opinion, it is a potential candidate for a rollback and the announcement should not come as a surprise.

Weak markets, a low share price, a large number of shares outstanding, little or no cash — and you have a company which is an ideal candidate for a rollback.

The basic concept of a rollback or consolidation in a company’s shares is rather simple.

…

We are witnessing a few cases of rollbacks not with the purpose of raising more money but rather to facilitate the listing of the company’s shares on the NYSE [New York Stock Exchange] Amex.

…
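The arithmetic behind a consolidation like Lomiko’s 10:1 rollback is straightforward: the outstanding share count is divided by the consolidation ratio, and the per-share price scales up by the same factor, leaving the market capitalization unchanged. A minimal sketch, using invented example figures rather than Lomiko’s actual share count or price:

```python
# Hypothetical 10:1 share consolidation (rollback) arithmetic.
# The share count and price below are example values, not Lomiko's.
ratio = 10
shares_before = 150_000_000   # shares outstanding before the rollback
price_before = 0.05           # share price in dollars before the rollback

shares_after = shares_before // ratio   # every 10 old shares become 1 new share
price_after = price_before * ratio      # price scales up by the same factor

# The company's market capitalization is unchanged by the rollback itself
market_cap_before = shares_before * price_before
market_cap_after = shares_after * price_after
```

Nothing about the company’s value changes in the transaction itself; the appeal, as the kitco.com excerpt notes, is a higher share price and a smaller float, which can help with exchange listing requirements or future equity financing.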

I have no idea what situation Lomiko finds itself in, but it should be noted that graphene research has been active since 2004, when the first graphene sheets were extracted from graphite. This is a relatively new field of endeavour, and Lomiko (along with other companies) is in the position of pioneering the effort here in Canada. That said, there are many competitors to graphene and a major international race to commercialize nanotechnology-enabled products.

Are there any leaders in the ‘graphene race’?

Getting back to the question in the headline, I don’t think there are any leaders at the moment. No one seems to have what they used to call “a killer app” — that one application/product that everyone wants and which would drive demand for graphene.