Tag Archives: smartphones

Caption: Novel nanolaser leverages the same color-changing mechanism that a chameleon uses to camouflage its skin. Credit: Egor Kamelev Courtesy: Northwestern University

I wish there were some detail included about how those colo(u)rs were achieved in that photograph. Strangely, Northwestern University (Chicago, Illinois, US) is more interested in describing the technology that chameleons have inspired. A June 20, 2018 news item on ScienceDaily announces the research,

As a chameleon shifts its color from turquoise to pink to orange to green, nature’s design principles are at play. Complex nano-mechanics are quietly and effortlessly working to camouflage the lizard’s skin to match its environment.

Inspired by nature, a Northwestern University team has developed a novel nanolaser that changes colors using the same mechanism as chameleons. The work could open the door for advances in flexible optical displays in smartphones and televisions, wearable photonic devices and ultra-sensitive sensors that measure strain.

“Chameleons can easily change their colors by controlling the spacing among the nanocrystals on their skin, which determines the color we observe,” said Teri W. Odom, Charles E. and Emma H. Morrison Professor of Chemistry in Northwestern’s Weinberg College of Arts and Sciences. “This coloring based on surface structure is chemically stable and robust.”

The same way a chameleon controls the spacing of nanocrystals on its skin, the Northwestern team’s laser exploits periodic arrays of metal nanoparticles on a stretchable polymer matrix. As the matrix either stretches to pull the nanoparticles farther apart or contracts to push them closer together, the light emitted from the laser changes wavelength, which also changes its color.

“Hence, by stretching and releasing the elastomer substrate, we could select the emission color at will,” Odom said.

The resulting laser is robust, tunable, reversible and has a high sensitivity to strain. These properties are critical for applications in responsive optical displays, on-chip photonic circuits and multiplexed optical communication.
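A toy calculation can illustrate the tuning mechanism. This sketch is my own, not the Northwestern group’s model; the array period, the effective refractive index, and the first-order diffraction condition are all assumptions chosen for illustration:

```python
# Hypothetical sketch: in a plasmonic lattice laser, the lasing wavelength
# roughly tracks the first-order diffraction condition lambda ~ n_eff * a,
# where a is the nanoparticle array period and n_eff is the effective
# refractive index of the surrounding medium. Both values below are assumed.

N_EFF = 1.52          # assumed effective index of the polymer matrix
PERIOD_NM = 600.0     # assumed unstrained array period, in nanometres

def lasing_wavelength_nm(strain: float,
                         period_nm: float = PERIOD_NM,
                         n_eff: float = N_EFF) -> float:
    """Estimate emission wavelength for a given tensile strain.

    Stretching the elastomer by `strain` (e.g. 0.05 for 5%) increases the
    lattice period to period_nm * (1 + strain), red-shifting the emission;
    releasing the strain reverses the shift.
    """
    return n_eff * period_nm * (1.0 + strain)

for strain in (0.0, 0.05, 0.10):
    print(f"strain {strain:4.0%} -> ~{lasing_wavelength_nm(strain):.0f} nm")
```

The reversibility of the color change falls out of the geometry: the wavelength depends only on the instantaneous particle spacing, not on the stretching history.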

This is the second of a two-part posting about robots in Vancouver and Canada. The first part included a definition, a brief mention of a robot ethics quandary, and sexbots. This part is all about the future. (Part one is here.)

Canadian Robotics Strategy

Meetings were held Sept. 28 – 29, 2017 in, surprisingly, Vancouver. (For those who don’t know, this is surprising because most of the robotics and AI research seems to be concentrated in eastern Canada. If you don’t believe me, take a look at the speaker list for Day 2 or the ‘Canadian Stakeholder’ meeting day.) From the NSERC (Natural Sciences and Engineering Research Council) events page of the Canadian Robotics Network,

Join us as we gather robotics stakeholders from across the country to initiate the development of a national robotics strategy for Canada. Sponsored by the Natural Sciences and Engineering Research Council of Canada (NSERC), this two-day event coincides with the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017) in order to leverage the experience of international experts as we explore Canada’s need for a national robotics strategy.

Where
Vancouver, BC, Canada

Objectives

The purpose of this two-day event is to gather members of the robotics ecosystem from across Canada to initiate the development of a national robotics strategy that builds on our strengths and capacities in robotics, and is uniquely tailored to address Canada’s economic needs and social values.

This event has been sponsored by the Natural Sciences and Engineering Research Council of Canada (NSERC) and is supported in kind by the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017) as an official Workshop of the conference. The first of two days coincides with IROS 2017 – one of the premier robotics conferences globally – in order to leverage the experience of international robotics experts as we explore Canada’s need for a national robotics strategy here at home.

Who should attend

Representatives from industry, research, government, startups, investment, education, policy, law, and ethics who are passionate about building a robust and world-class ecosystem for robotics in Canada.

Morning Program: “Developing robotics innovation policy and establishing key performance indicators that are relevant to your region” Leading international experts share their experience designing robotics strategies and policy frameworks in their regions and explore international best practices. Opening Remarks by Prof. Hong Zhang, IROS 2017 Conference Chair.

Afternoon Program: “Understanding the Canadian robotics ecosystem” Canadian stakeholders from research, industry, investment, ethics and law provide a collective overview of the Canadian robotics ecosystem. Opening Remarks by Ryan Gariepy, CTO of Clearpath Robotics.

On the second day of the program, robotics stakeholders from across the country gather at UBC for a full-day brainstorming session to identify Canada’s unique strengths and opportunities relative to the global competition, and to align on a strategic vision for robotics in Canada.

I was glad to see in the agenda that some of the international speakers represented research efforts from outside the usual Europe/US axis.

I have been in touch with one of the organizers (also mentioned in part one with regard to robot ethics), Ajung Moon (her website is here), who says that there will be a white paper available on the Canadian Robotics Network website at some point in the future. I’ll keep looking for it and, in the meantime, I wonder what the 2018 Canadian federal budget will offer robotics.

Robots and popular culture

For anyone living in Canada or the US, Westworld (television series) is probably the most recent and well-known ‘robot’ drama to premiere in the last year. As for movies, I think Ex Machina from 2014 probably qualifies in that category. Interestingly, both Westworld and Ex Machina seem quite concerned with sex, with Westworld adding significant doses of violence as another concern.

I am going to focus on another robot story, the 2012 movie, Robot & Frank, which features a care robot and an older man,

Frank (played by Frank Langella), a former jewel thief, teaches a robot the skills necessary to rob some neighbours of their valuables. The ethical issue broached in the film isn’t whether or not the robot should learn the skills and assist Frank in his thieving ways, although that’s touched on when Frank keeps pointing out that planning his heist requires he live more healthily. No, the problem arises afterward when the neighbour accuses Frank of the robbery and Frank removes what he believes is all the evidence. He believes he’s going to successfully evade arrest until the robot notes that Frank will have to erase its memory in order to remove all of the evidence. The film ends without the robot’s fate being made explicit.

In a way, I find the ethics query (was the robot Frank’s friend or just a machine?) posed in the film more interesting than the one in Vikander’s story, an issue which does have a history. For example, care aides, nurses, and/or servants would have dealt with requests to give an alcoholic patient a drink. Wouldn’t there already be established guidelines and practices which could be adapted for robots? Or, is this question made anew by something intrinsically different about robots?

To be clear, Vikander’s story is a good introduction and starting point for these kinds of discussions as is Moon’s ethical question. But they are starting points and I hope one day there’ll be a more extended discussion of the questions raised by Moon and noted in Vikander’s article (a two- or three-part series of articles? public discussions?).

How will humans react to robots?

Earlier there was the contention that intimate interactions with robots and sexbots would decrease empathy and the ability of human beings to interact with each other in caring ways. This sounds a bit like the argument about smartphones/cell phones and teenagers who don’t relate well to others in real life because most of their interactions are mediated through a screen, which many seem to prefer. It may be partially true but, arguably, books too are an antisocial technology, as noted in Walter J. Ong’s influential 1982 book, ‘Orality and Literacy’ (from the Walter J. Ong Wikipedia entry),

A major concern of Ong’s works is the impact that the shift from orality to literacy has had on culture and education. Writing is a technology like other technologies (fire, the steam engine, etc.) that, when introduced to a “primary oral culture” (which has never known writing) has extremely wide-ranging impacts in all areas of life. These include culture, economics, politics, art, and more. Furthermore, even a small amount of education in writing transforms people’s mentality from the holistic immersion of orality to interiorization and individuation. [emphases mine]

So, robotics and artificial intelligence would not be the first technologies to affect our brains and our social interactions.

There’s another area where human-robot interaction may have unintended personal consequences according to April Glaser’s Sept. 14, 2017 article on Slate.com (Note: Links have been removed),

The customer service industry is teeming with robots. From automated phone trees to touchscreens, software and machines answer customer questions, complete orders, send friendly reminders, and even handle money. For an industry that is, at its core, about human interaction, it’s increasingly being driven to a large extent by nonhuman automation.

But despite the dreams of science-fiction writers, few people enter a customer-service encounter hoping to talk to a robot. And when the robot malfunctions, as they so often do, it’s a human who is left to calm angry customers. It’s understandable that after navigating a string of automated phone menus and being put on hold for 20 minutes, a customer might take her frustration out on a customer service representative. Even if you know it’s not the customer service agent’s fault, there’s really no one else to get mad at. It’s not like a robot cares if you’re angry.

When human beings need help with something, says Madeleine Elish, an anthropologist and researcher at the Data and Society Institute who studies how humans interact with machines, they’re not only looking for the most efficient solution to a problem. They’re often looking for a kind of validation that a robot can’t give. “Usually you don’t just want the answer,” Elish explained. “You want sympathy, understanding, and to be heard”—none of which are things robots are particularly good at delivering. In a 2015 survey of over 1,300 people conducted by researchers at Boston University, over 90 percent of respondents said they start their customer service interaction hoping to speak to a real person, and 83 percent admitted that in their last customer service call they trotted through phone menus only to make their way to a human on the line at the end.

“People can get so angry that they have to go through all those automated messages,” said Brian Gnerer, a call center representative with AT&T in Bloomington, Minnesota. “They’ve been misrouted or been on hold forever or they pressed one, then two, then zero to speak to somebody, and they are not getting where they want.” And when people do finally get a human on the phone, “they just sigh and are like, ‘Thank God, finally there’s somebody I can speak to.’ ”

Even if robots don’t always make customers happy, more and more companies are making the leap to bring in machines to take over jobs that used to specifically necessitate human interaction. McDonald’s and Wendy’s both reportedly plan to add touchscreen self-ordering machines to restaurants this year. Facebook is saturated with thousands of customer service chatbots that can do anything from hail an Uber, retrieve movie times, to order flowers for loved ones. And of course, corporations prefer automated labor. As Andy Puzder, CEO of the fast-food chains Carl’s Jr. and Hardee’s and former Trump pick for labor secretary, bluntly put it in an interview with Business Insider last year, robots are “always polite, they always upsell, they never take a vacation, they never show up late, there’s never a slip-and-fall, or an age, sex, or race discrimination case.”

But those robots are backstopped by human beings. How does interacting with more automated technology affect the way we treat each other? …

…

“We know that people treat artificial entities like they’re alive, even when they’re aware of their inanimacy,” writes Kate Darling, a researcher at MIT who studies ethical relationships between humans and robots, in a recent paper on anthropomorphism in human-robot interaction. Sure, robots don’t have feelings and don’t feel pain (not yet, anyway). But as more robots rely on interaction that resembles human interaction, like voice assistants, the way we treat those machines will increasingly bleed into the way we treat each other.

…

It took me a while to realize that what Glaser is talking about are AI systems and not robots as such. (sigh) It’s so easy to conflate the concepts.

AI ethics (Toby Walsh and Suzanne Gildert)

Jack Stilgoe of the Guardian published a brief Oct. 9, 2017 introduction to his more substantive (30 mins.?) podcast interview with Dr. Toby Walsh where they discuss stupid AI amongst other topics (Note: A link has been removed),

Professor Toby Walsh has recently published a book – Android Dreams – giving a researcher’s perspective on the uncertainties and opportunities of artificial intelligence. Here, he explains to Jack Stilgoe that we should worry more about the short-term risks of stupid AI in self-driving cars and smartphones than the speculative risks of super-intelligence.

Professor Walsh discusses the effects that AI could have on our jobs, the shapes of our cities and our understandings of ourselves. As someone developing AI, he questions the hype surrounding the technology. He is scared by some drivers’ real-world experimentation with their not-quite-self-driving Teslas. And he thinks that Siri needs to start owning up to being a computer.

I found this discussion to cast a decidedly different light on the future of robotics and AI. Walsh is much more interested in discussing immediate issues like the problems posed by ‘self-driving’ cars. (Aside: Should we be calling them robot cars?)

One ethical issue Walsh raises concerns accident data. He compares what’s happening with accident data from self-driving (robot) cars to how the aviation industry handles accidents. Hint: accident data involving airplanes is shared. Would you like to guess who does not share their data?

Sharing and analyzing data and developing new safety techniques based on that data has made flying a remarkably safe transportation technology. Walsh argues the same could be done for self-driving cars if companies like Tesla took the attitude that safety is in everyone’s best interests and shared their accident data in a scheme similar to the aviation industry’s.

In an Oct. 12, 2017 article by Matthew Braga for Canadian Broadcasting Corporation (CBC) news online, another ethical issue is raised by Suzanne Gildert (a participant in the Canadian Robotics Roadmap/Strategy meetings mentioned earlier here). Note: Links have been removed,

… Suzanne Gildert, the co-founder and chief science officer of Vancouver-based robotics company Kindred. Since 2014, her company has been developing intelligent robots [emphasis mine] that can be taught by humans to perform automated tasks — for example, handling and sorting products in a warehouse.

The idea is that when one of Kindred’s robots encounters a scenario it can’t handle, a human pilot can take control. The human can see, feel and hear the same things the robot does, and the robot can learn from how the human pilot handles the problematic task.

This process, called teleoperation, is one way to fast-track learning by manually showing the robot examples of what its trainers want it to do. But it also poses a potential moral and ethical quandary that will only grow more serious as robots become more intelligent.

“That AI is also learning my values,” Gildert explained during a talk on robot ethics at the Singularity University Canada Summit in Toronto on Wednesday [Oct. 11, 2017]. “Everything — my mannerisms, my behaviours — is all going into the AI.”

…

At its worst, everything from algorithms used in the U.S. to sentence criminals to image-recognition software has been found to inherit the racist and sexist biases of the data on which it was trained.

But just as bad habits can be learned, good habits can be learned too. The question is, if you’re building a warehouse robot like Kindred is, is it more effective to train those robots’ algorithms to reflect the personalities and behaviours of the humans who will be working alongside it? Or do you try to blend all the data from all the humans who might eventually train Kindred robots around the world into something that reflects the best strengths of all?

…

I notice Gildert distinguishes her robots as “intelligent robots” and then focuses on AI and issues with bias, which have already arisen with regard to algorithms (see my May 24, 2017 posting about bias in machine learning and AI). Note: If you’re in Vancouver on Oct. 26, 2017 and interested in algorithms and bias, there’s a talk being given by Dr. Cathy O’Neil, author of Weapons of Math Destruction, on the topic of Gender and Bias in Algorithms. It’s not free but tickets are here.

Final comments

There is one more aspect I want to mention. Even as someone who usually deals with nanobots, it’s easy to start discussing robots as if the humanoid ones are the only ones that exist. To recapitulate, there are humanoid robots, utilitarian robots, intelligent robots, AI, nanobots, microscopic bots, and more, all of which raise questions about ethics and social impacts.

However, there is one more category I want to add to this list: cyborgs. They live amongst us now. Anyone who’s had a hip or knee replacement or a pacemaker or a deep brain stimulator or other such implanted device qualifies as a cyborg. Increasingly too, prosthetics are being introduced and made part of the body. My April 24, 2017 posting features this story,

Bill Kochevar grabbed a mug of water, drew it to his lips and drank through the straw.

His motions were slow and deliberate, but then Kochevar hadn’t moved his right arm or hand for eight years.

And it took some practice to reach and grasp just by thinking about it.

Kochevar, who was paralyzed below his shoulders in a bicycling accident, is believed to be the first person with quadriplegia in the world to have arm and hand movements restored with the help of two temporarily implanted technologies. [emphasis mine]

A brain-computer interface with recording electrodes under his skull, and a functional electrical stimulation (FES) system* activating his arm and hand, reconnect his brain to paralyzed muscles.

…

Does a brain-computer interface have an effect on human brain and, if so, what might that be?

In any discussion (assuming there is funding for it) about ethics and social impact, we might want to invite the broadest range of people possible at an ‘earlyish’ stage (although we’re already pretty far down the ‘automation road’) or, as Jack Stilgoe and Toby Walsh note, technological determinism holds sway.

Once again here are links for the articles and information mentioned in this double posting,

Some day, your smartphone might completely conform to your wrist, and when it does, it might be covered in pure gold, thanks to researchers at Missouri University of Science and Technology.

Nokia, a Finnish telecommunications company, was promoting its idea for a smartphone ‘and more’ that could be worn around your wrist in a concept called the Morph. It was introduced in 2008 at the Museum of Modern Art in New York City (see my March 20, 2010 posting for one of my last updates on this moribund project). Here’s Nokia’s Morph video (almost 6 mins.),

Getting back to the present day, here’s what the Missouri researchers are working on,

An example of a gold foil peeled from single crystal silicon. Reprinted with permission from Naveen Mahenderkar et al., Science [355]:[1203] (2017)

Writing in the March 17 [2017] issue of the journal Science, the S&T researchers say they have developed a way to “grow” thin layers of gold on single crystal wafers of silicon, remove the gold foils, and use them as substrates on which to grow other electronic materials. The research team’s discovery could revolutionize wearable or “flexible” technology research, greatly improving the versatility of such electronics in the future.

According to lead researcher Jay A. Switzer, the majority of research into wearable technology has been done using polymer substrates, or substrates made up of multiple crystals. “And then they put some typically organic semiconductor on there that ends up being flexible, but you lose the order that (silicon) has,” says Switzer, Donald L. Castleman/FCR Endowed Professor of Discovery in Chemistry at S&T.

Because the polymer substrates are made up of multiple crystals, they have what are called grain boundaries, says Switzer. These grain boundaries can greatly limit the performance of an electronic device.

“Say you’re making a solar cell or an LED,” he says. “In a semiconductor, you have electrons and you have holes, which are the opposite of electrons. They can combine at grain boundaries and give off heat. And then you end up losing the light that you get out of an LED, or the current or voltage that you might get out of a solar cell.”

Most electronics on the market are made of silicon because it’s “relatively cheap, but also highly ordered,” Switzer says.

“99.99 percent of electronics are made out of silicon, and there’s a reason – it works great,” he says. “It’s a single crystal, and the atoms are perfectly aligned. But, when you have a single crystal like that, typically, it’s not flexible.”

By starting with single crystal silicon and growing gold foils on it, Switzer is able to keep the high order of silicon on the foil. But because the foil is gold, it’s also highly durable and flexible.

“We bent it 4,000 times, and basically the resistance didn’t change,” he says.

The gold foils are also essentially transparent because they are so thin. According to Switzer, his team has peeled foils as thin as seven nanometers.
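A back-of-the-envelope estimate (my own, not from the paper) suggests why a seven-nanometre foil would look essentially transparent; the penetration-depth value is an assumption, and reflection at the surfaces is ignored:

```python
import math

# Light passing through a metal film is attenuated roughly as exp(-t / delta),
# where delta is the optical penetration depth. For gold at visible
# wavelengths delta is on the order of ~13 nm (assumed value here), so a 7 nm
# foil transmits an appreciable fraction of the incident light, while a
# conventional 100 nm film is effectively opaque.

PENETRATION_DEPTH_NM = 13.0  # assumed optical penetration depth of gold

def transmittance(thickness_nm: float,
                  delta_nm: float = PENETRATION_DEPTH_NM) -> float:
    """Crude intensity transmittance of a metal film, ignoring reflection."""
    return math.exp(-thickness_nm / delta_nm)

print(f"7 nm foil:   ~{transmittance(7.0):.0%} transmitted")
print(f"100 nm film: ~{transmittance(100.0):.0%} transmitted")
```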

Switzer says the challenge his research team faced was not in growing gold on the single crystal silicon, but getting it to peel off as such a thin layer of foil. Gold typically bonds very well to silicon.

“So we came up with this trick where we could photo-electrochemically oxidize the silicon,” Switzer says. “And the gold just slides off.”

Photoelectrochemical oxidation is the process by which light enables a semiconductor material, in this case silicon, to promote a catalytic oxidation reaction.

Switzer says thousands of gold foils—or foils of any number of other metals—can be made from a single crystal wafer of silicon.

The research team’s discovery can be considered a “happy accident.” Switzer says they were looking for a cheap way to make single crystals when they discovered this process.

“This is something that I think a lot of people who are interested in working with highly ordered materials like single crystals would appreciate making really easily,” he says. “Besides making flexible devices, it’s just going to open up a field for anybody who wants to work with single crystals.”

The conversion of bacteria from an enemy to be vanquished at all costs to a ‘frenemy’, a friendly enemy supplying possible solutions for problems is fascinating. An Oct. 26, 2016 news item on Nanowerk falls into the ‘frenemy’ camp,

A new prototype of a lithium-sulphur battery – which could have five times the energy density of a typical lithium-ion battery – overcomes one of the key hurdles preventing their commercial development by mimicking the structure of the cells which allow us to absorb nutrients.

Researchers have developed a prototype of a next-generation lithium-sulphur battery which takes its inspiration in part from the cells lining the human intestine. The batteries, if commercially developed, would have five times the energy density of the lithium-ion batteries used in smartphones and other electronics.

The new design, by researchers from the University of Cambridge, overcomes one of the key technical problems hindering the commercial development of lithium-sulphur batteries, by preventing the degradation of the battery caused by the loss of material within it. The results are reported in the journal Advanced Functional Materials.

Working with collaborators at the Beijing Institute of Technology, the Cambridge researchers based in Dr Vasant Kumar’s team in the Department of Materials Science and Metallurgy developed and tested a lightweight nanostructured material which resembles villi, the finger-like protrusions which line the small intestine. In the human body, villi are used to absorb the products of digestion and increase the surface area over which this process can take place.

In the new lithium-sulphur battery, a layer of material with a villi-like structure, made from tiny zinc oxide wires, is placed on the surface of one of the battery’s electrodes. This can trap fragments of the active material when they break off, keeping them electrochemically accessible and allowing the material to be reused.

“It’s a tiny thing, this layer, but it’s important,” said study co-author Dr Paul Coxon from Cambridge’s Department of Materials Science and Metallurgy. “This gets us a long way through the bottleneck which is preventing the development of better batteries.”

A typical lithium-ion battery is made of three separate components: an anode (negative electrode), a cathode (positive electrode) and an electrolyte in the middle. The most common materials for the anode and cathode are graphite and lithium cobalt oxide respectively, which both have layered structures. Positively-charged lithium ions move back and forth from the cathode, through the electrolyte and into the anode.

The crystal structure of the electrode materials determines how much energy can be squeezed into the battery. For example, due to the atomic structure of carbon, it takes six carbon atoms to bind a single lithium ion, limiting the maximum capacity of the battery.

Sulphur and lithium react differently, via a multi-electron transfer mechanism, meaning that elemental sulphur can offer a much higher theoretical capacity, resulting in a lithium-sulphur battery with much higher energy density. However, when the battery discharges, the lithium and sulphur interact and the ring-like sulphur molecules transform into chain-like structures, known as poly-sulphides. As the battery undergoes several charge-discharge cycles, bits of the poly-sulphide can go into the electrolyte, so that over time the battery gradually loses active material.
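As a rough check on those claims, here is a worked Faraday’s-law calculation (my own illustration, not from the paper; it ignores cell voltage, so it tracks specific capacity rather than full energy density) showing why sulphur’s multi-electron reaction is so attractive:

```python
# Theoretical specific capacity from Faraday's law:
#   Q = n * F / (3.6 * M)   [mAh/g]
# where n is the number of electrons transferred per formula unit,
# F is the Faraday constant, and M is the molar mass in g/mol.

F = 96485.0  # Faraday constant, C/mol

def specific_capacity_mah_per_g(n_electrons: float, molar_mass: float) -> float:
    """Theoretical capacity in mAh per gram of active material."""
    return n_electrons * F / (3.6 * molar_mass)

# Graphite hosts one lithium per six carbons (LiC6): n = 1, M = 6 * 12.011
graphite = specific_capacity_mah_per_g(1, 6 * 12.011)  # ~372 mAh/g

# Sulphur's multi-electron reaction (S + 2Li -> Li2S): n = 2, M = 32.06
sulphur = specific_capacity_mah_per_g(2, 32.06)        # ~1672 mAh/g

print(f"graphite: {graphite:.0f} mAh/g, sulphur: {sulphur:.0f} mAh/g")
print(f"sulphur/graphite ratio: {sulphur / graphite:.1f}x")
```

The roughly 4.5-fold capacity advantage is the source of the "five times the energy density" figure quoted above, once differences in cell voltage are folded in.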

The Cambridge researchers have created a functional layer which lies on top of the cathode and fixes the active material to a conductive framework so the active material can be reused. The layer is made up of tiny, one-dimensional zinc oxide nanowires grown on a scaffold. The concept was trialled using commercially-available nickel foam for support. After successful results, the foam was replaced by a lightweight carbon fibre mat to reduce the battery’s overall weight.

“Changing from stiff nickel foam to flexible carbon fibre mat makes the layer mimic the way small intestine works even further,” said study co-author Dr Yingjun Liu.

This functional layer, like the intestinal villi it resembles, has a very high surface area. The material has a very strong chemical bond with the poly-sulphides, allowing the active material to be used for longer, greatly increasing the lifespan of the battery.

“This is the first time a chemically functional layer with a well-organised nano-architecture has been proposed to trap and reuse the dissolved active materials during battery charging and discharging,” said the study’s lead author Teng Zhao, a PhD student from the Department of Materials Science & Metallurgy. “By taking our inspiration from the natural world, we were able to come up with a solution that we hope will accelerate the development of next-generation batteries.”

For the time being, the device is a proof of principle, so commercially-available lithium-sulphur batteries are still some years away. Additionally, while the number of times the battery can be charged and discharged has been improved, it is still not able to go through as many charge cycles as a lithium-ion battery. However, since a lithium-sulphur battery does not need to be charged as often as a lithium-ion battery, it may be the case that the increase in energy density cancels out the lower total number of charge-discharge cycles.

“This is a way of getting around one of those awkward little problems that affects all of us,” said Coxon. “We’re all tied in to our electronic devices – ultimately, we’re just trying to make those devices work better, hopefully making our lives a little bit nicer.”

On the heels of Samsung’s Galaxy Note 7 recall due to fires (see Alex Fitzpatrick’s Sept. 9, 2016 article for Time magazine for a good description of lithium-ion batteries and why they catch fire; see my May 29, 2013 posting on lithium-ion batteries, fires [including the airplane fires], and nanotechnology risk assessments), there’s new research on lithium-ion batteries and fires from China. From an Oct. 21, 2016 news item on Nanotechnology Now,

Dozens of dangerous gases are produced by the batteries found in billions of consumer devices, like smartphones and tablets, according to a new study. The research, published in Nano Energy, identified more than 100 toxic gases released by lithium batteries, including carbon monoxide.

The gases are potentially fatal: they can cause strong irritation to the skin, eyes and nasal passages, and harm the wider environment. The researchers behind the study, from the Institute of NBC Defence and Tsinghua University in China, say many people may be unaware of the dangers of overheating, damaging or using a disreputable charger for their rechargeable devices.

In the new study, the researchers investigated a type of rechargeable battery, known as a “lithium-ion” battery, which is placed in two billion consumer devices every year.

“Nowadays, lithium-ion batteries are being actively promoted by many governments all over the world as a viable energy solution to power everything from electric vehicles to mobile devices. The lithium-ion battery is used by millions of families, so it is imperative that the general public understand the risks behind this energy source,” explained Dr. Jie Sun, lead author and professor at the Institute of NBC Defence.

The dangers of exploding batteries have led manufacturers to recall millions of devices: Dell recalled four million laptops in 2006 and millions of Samsung Galaxy Note 7 devices were recalled this month after reports of battery fires. But the threats posed by toxic gas emissions and the source of these emissions are not well understood.

Dr. Sun and her colleagues identified several factors that can cause an increase in the concentration of the toxic gases emitted. A fully charged battery will release more toxic gases than a battery with 50 percent charge, for example. The chemicals contained in the batteries and their capacity to release charge also affected the concentrations and types of toxic gases released.

Identifying the gases produced and the reasons for their emission gives manufacturers a better understanding of how to reduce toxic emissions and protect the wider public, as lithium-ion batteries are used in a wide range of environments.

“Such dangerous substances, in particular carbon monoxide, have the potential to cause serious harm within a short period of time if they leak inside a small, sealed environment, such as the interior of a car or an airplane compartment,” Dr. Sun said.

Almost 20,000 lithium-ion batteries were heated to the point of combustion in the study, causing most devices to explode and all to emit a range of toxic gases. Batteries can be exposed to such temperature extremes in the real world, for example, if the battery overheats or is damaged in some way.

The researchers now plan to develop this detection technique to improve the safety of lithium-ion batteries so they can be used to power the electric vehicles of the future safely.

“We hope this research will allow the lithium-ion battery industry and electric vehicle sector to continue to expand and develop with a greater understanding of the potential hazards and ways to combat these issues,” Sun concluded.

Swiping touchscreens with your finger has become a dominant means of accessing information in many applications but there is at least one problem associated with this action. From an Oct. 2, 2015 news item on phys.org,

While touchscreens are practical, touchless displays would be even more so. That’s because, although touchscreens have enabled the smartphone’s advance into our lives and are essential for using cash dispensers or ticket machines, they do have certain disadvantages. Touchscreens suffer from mechanical wear over time and are a transmission path for bacteria and viruses. To avoid these problems, scientists at Stuttgart’s Max Planck Institute for Solid State Research and LMU Munich have now developed nanostructures that change their electrical and even their optical properties as soon as a finger comes anywhere near them.

A touchless display may be able to capitalize on a human trait which is of vital importance, although sometimes unwanted: This is the fact that our body sweats – and is constantly emitting water molecules through tiny pores in the skin. Scientists of the Nanochemistry group led by Bettina Lotsch at the Max Planck Institute for Solid State Research in Stuttgart and the LMU Munich have now been able to visualize the transpiration of a finger with a special moisture sensor which reacts as soon as an object – like an index finger – approaches its surface, without touching it. The increasing humidity is converted into an electrical signal or translated into a colour change, thus enabling it to be measured.

Phosphatoantimonic acid is what enables it to do this. This acid is a crystalline solid at room temperature with a structure made up of antimony, phosphorus, oxygen and hydrogen atoms. “It’s long been known to scientists that this material is able to take up water and swells considerably in the process,” explained Pirmin Ganter, doctoral student at the Max Planck Institute for Solid State Research and the Chemistry Department at LMU Munich. This water uptake also changes the properties of the material. For instance, its electrical conductivity increases as the number of stored water molecules rises. This is what enables it to serve as a measure of ambient moisture.

A sandwich nanomaterial structure exposed to moisture also changes its colour

However, the scientists aren’t so interested in developing a new moisture sensor. What they really want is to use it in touchless displays. “Because these sensors react in a very local manner to any increase in moisture, it is quite conceivable that this sort of material with moisture-dependent properties could also be used for touchless displays and monitors,” said Ganter. Touchless screens of this kind would require nothing more than a finger to get near the display to change their electrical or optical properties – and with them the input signal – at a specific point on the display.

Taking phosphatoantimonate nanosheets as their basis, the Stuttgart scientists then developed a photonic nanostructure which reacts to the moisture by changing colour. “If this was built into a monitor, the users would then receive visible feedback to their finger motion” explained Katalin Szendrei, also a doctoral student in Bettina Lotsch’s group. To this end, the scientists created a multilayer sandwich material with alternating layers of ultrathin phosphatoantimonate nanosheets and silicon dioxide (SiO2) or titanium dioxide nanoparticles (TiO2). Comprising more than ten layers, the stack ultimately reached a height of little more than one millionth of a metre.

For one thing, the colour of the sandwich material can be set via the thickness of the layers. And for another, the colour of the sandwich changes if the scientists increase the relative humidity in the immediate surroundings of the material, for instance by moving a finger towards the screen. “The reason for this lies in the storage of water molecules between the phosphatoantimonate layers, which makes the layers swell considerably,” explained Katalin Szendrei. “A change in the thickness of the layers in this process is accompanied by a change in the colour of the sensor – produced in a similar way to what gives colour to a butterfly wing or in mother-of-pearl.”
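The colour shift described above follows the standard Bragg-stack relation: the reflected wavelength scales with the optical thickness of one layer period, so swelling the phosphatoantimonate layers red-shifts the colour. A minimal sketch, using illustrative refractive indices and layer thicknesses (assumptions for this example, not the measured values):

```python
# Sketch: first-order Bragg reflection wavelength of an alternating
# two-material stack at normal incidence: lambda = 2 * (n1*d1 + n2*d2).
# Indices and thicknesses below are illustrative assumptions.

def bragg_wavelength_nm(n1, d1_nm, n2, d2_nm):
    """First-order normal-incidence Bragg peak of a periodic bilayer stack."""
    return 2.0 * (n1 * d1_nm + n2 * d2_nm)

# Assumed indices: phosphatoantimonate sheet (~1.6), TiO2 nanoparticle layer (~1.9)
n_sheet, n_tio2 = 1.6, 1.9

dry = bragg_wavelength_nm(n_sheet, 75, n_tio2, 60)       # thin, "dry" sheet layer
swollen = bragg_wavelength_nm(n_sheet, 130, n_tio2, 60)  # water-swollen sheet layer

print(f"dry stack peak:     {dry:.0f} nm")   # blue end of the spectrum
print(f"swollen stack peak: {swollen:.0f} nm")  # red-shifted
```

Swelling only the moisture-sensitive layers is enough to walk the peak across the visible spectrum, which is the blue-to-red change the group reports.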

The material reacts to the humidity change within a few milliseconds

This is a property that is fundamentally well known and characteristic of so-called photonic crystals. But scientists had never before observed such a large colour change as they now have in the lab in Stuttgart. “The colour of the nanostructure turns from blue to red when a finger gets near, for example. In this way, the colour can be tuned through the whole of the visible spectrum depending on the amount of water vapour taken up,” stressed Bettina Lotsch.

The scientists’ new approach is not only captivating because of the striking colour change. What’s also important is the fact that the material reacts to the change in humidity within a few milliseconds – literally in the blink of an eye. Previously reported materials normally took several seconds or more to respond. That is much too slow for practical applications. And there’s another thing that other materials couldn’t always do: The sandwich structure consisting of phosphatoantimonate nanosheets and oxide nanoparticles is highly stable from a chemical perspective and responds selectively to water vapour.

A layer protecting against chemical influences has to let moisture through

The scientists can imagine their materials being used in much more than just future generations of smartphones, tablets or notebooks. “Ultimately, we could see touchless displays also being deployed in many places where people currently have to touch monitors to navigate,” said Bettina Lotsch. For instance in cash dispensers or ticket machines, or even at the weighing scales in the supermarket’s vegetable aisle. Displays in public places that are used by many different people would have distinct hygiene benefits if they were touchless.

But before we see them being used in such places, the scientists have a few more challenges to overcome. It’s important, for example, that the nanostructures can be produced economically. To minimize wear, the structures still need to be coated with a protective layer if they’re going to be used in anything like a display. And that, again, has to meet not one but two different requirements: It must protect the moisture-sensitive layers against chemical and mechanical influences. And it must, of course, let the moisture pass through. But the Stuttgart scientists have an idea for how to achieve that already. An idea they are currently starting to put into practice with an additional cooperation partner on board.

Dexter Johnson’s Oct. 2, 2015 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) provides some additional context for this research (Note: A link has been removed),

In a world where the “swipe” has become a dominant computer interface method along with moving and clicking the mouse, the question becomes what’s next? For researchers at Stuttgart’s Max Planck Institute for Solid State Research and LMU Munich, Germany, the answer continues to be a swipe, but one in which you don’t actually need to touch the screen with your finger. Researchers call these no-contact computer screens touchless positioning interfaces (TPI).

For the first time since I’ve started posting about Vancouver’s Café Scientifique there’s been a last-minute change of speakers. It’s due to an addition to Dr. Kramer’s family. Congratulations!

So, Tuesday, April 28, 2015’s Café Scientifique, held in the back room of The Railway Club (2nd floor of 579 Dunsmuir St. [at Seymour St.]), will be hosting a talk from a different speaker and on a different topic,

Ph.D candidate and Vanier Scholar, Kostadin Kushlev from the Department of Psychology at UBC presenting his exciting research. Details are as follows:

Always Connected: How Smartphones May be Disconnecting Us From the People Around Us.

Smartphones have transformed where and how we access information and connect with our family and friends. But how might these powerful pocket computers be affecting how and when we interact with others in person? In this talk, I will present recent data from our lab suggesting that smartphones can compromise how connected we feel to close others, peers, and strangers. Parents spending time with their children felt more distracted and less socially connected when they used their phones a lot. Peers waiting together for an appointment connected with each other less and felt less happy when they had access to their phones as compared to when they did not. And, people looking for directions trusted members of their community less when they relied on their phones for directions rather than on the kindness of strangers. These findings highlight some of the perils of being constantly connected for our nonvirtual social lives and for the social fabric of society more generally.

On looking up the speaker online, I found that the main focus of his research is happiness, from the University of British Columbia’s (UBC) Graduate and PostGraduate webpage for Kostadin Kushlev,

Research topic: Happiness and well-being
Research group: Social Cognition and Emotion Lab
Research location: UBC Vancouver, Kenny Building, 2136 West Mall
Research supervisor: Elizabeth Dunn

Research description
My research focuses on the emotional experience of people. The topics that I am currently investigating range from what gives (or takes away from) people’s experience of meaning in life to how people react to shame and guilt, and to what extent new technologies introduce stress and anxiety in our lives.

Home town: Madan
Country: Bulgaria

Given that the United Nations’ 2015 World Happiness Report (co-authored by UBC professor emeritus John Helliwell) was released on April 23, 2015, the same day that the Museum of Vancouver’s The Happy Show (Stefan Sagmeister: The Happy Show) opened, Kostadin Kushlev seems like a ‘happy’ choice for a substitute speaker just days later on April 28, 2015, especially since the original topic was ‘pain’.

Contrary to other transparent surfaces, the wings of the glasswing butterfly (Greta Oto) hardly reflect any light. Lenses or displays of mobiles might profit from the investigation of this phenomenon. (Photo: Radwanul Hasan Siddique, KIT)

I wouldn’t really have believed it. Other than glass, I’ve never seen anything in nature that’s as transparent and distortion-free as this butterfly’s wings.

The effect is familiar from the smartphone: sunlight is reflected by the display and hardly anything can be seen. In contrast to this, the glasswing butterfly hardly reflects any light in spite of its transparent wings. As a result, it is difficult for predatory birds to track the butterfly during flight. Researchers of KIT under the direction of Hendrik Hölscher found that irregular nanostructures on the surface of the butterfly wing cause the low reflection. In theoretical experiments, they succeeded in reproducing the effect, which opens up fascinating application options, e.g. for displays of mobile phones or laptops.

Transparent materials, such as glass, always reflect part of the incident light. Some animals with transparent surfaces, such as the moth with its eyes, succeed in keeping reflections small, but only when the view angle is vertical to the surface. The wings of the glasswing butterfly, which lives mainly in Central America, however, also have a very low reflection when viewed at higher angles. Depending on the view angle, specular reflection varies between two and five percent. For comparison: as a function of the view angle, a flat glass plane reflects between eight and 100 percent, i.e. reflection many times that of the butterfly wing. Interestingly, the butterfly wing not only exhibits a low reflection of the light spectrum visible to humans, but also suppresses the infrared and ultraviolet radiation that can be perceived by animals. This is important to the survival of the butterfly.
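The eight-to-100-percent figure quoted for flat glass is just the Fresnel reflectance of a two-surface plate as the angle of incidence grows. A quick sketch, assuming n = 1.5 for glass and ignoring multiple internal reflections:

```python
import math

# Sketch: unpolarized Fresnel reflectance of a flat glass plate (n ~ 1.5)
# versus angle of incidence, reproducing the roughly 8%-to-100% range
# quoted for glass. Two surfaces are summed, multiple reflections ignored.

def fresnel_reflectance(theta_deg, n=1.5):
    """Unpolarized single-surface reflectance, air -> glass."""
    ti = math.radians(theta_deg)
    sin_tt = math.sin(ti) / n          # Snell's law
    if sin_tt >= 1.0:                  # beyond critical angle (not reached air->glass)
        return 1.0
    tt = math.asin(sin_tt)
    rs = (math.sin(ti - tt) / math.sin(ti + tt)) ** 2  # s-polarized
    rp = (math.tan(ti - tt) / math.tan(ti + tt)) ** 2  # p-polarized
    return 0.5 * (rs + rp)

for angle in (0.01, 45, 70, 85, 89.9):
    r_plate = min(1.0, 2 * fresnel_reflectance(angle))  # two surfaces, roughly
    print(f"{angle:5.1f} deg -> ~{100 * r_plate:.0f}% reflected")
```

At near-normal incidence each surface reflects about 4 percent (hence ~8 percent for the plate), climbing toward 100 percent at grazing angles, exactly the range the press release cites.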

For research into this so far unstudied phenomenon, the scientists examined glasswings by scanning electron microscopy. Earlier studies revealed that regular pillar-like nanostructures are responsible for the low reflections of other animals. The scientists now also found nanopillars on the butterfly wings. In contrast to previous findings, however, they are arranged irregularly and feature a random height. Typical height of the pillars varies between 400 and 600 nanometers, the distance of the pillars ranges between 100 and 140 nanometers. This corresponds to about one thousandth of a human hair.

In simulations, the researchers mathematically modeled this irregularity of the nanopillars in height and arrangement. They found that the calculated reflected amount of light exactly corresponds to the observed amount at variable view angles. In this way, they proved that the low reflection at variable view angles is caused by this irregularity of the nanopillars. Hölscher’s doctoral student Radwanul Hasan Siddique, who discovered this effect, considers the glasswing butterfly a fascinating animal: “Not only optically with its transparent wings, but also scientifically. In contrast to other natural phenomena, where regularity is of top priority, the glasswing butterfly uses an apparent chaos to reach effects that are also fascinating for us humans.”
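The intuition behind those simulations is that random pillar heights smear out the air-to-wing boundary: at any given depth only some pillars are present, so the material fill fraction (and hence the effective refractive index) rises gradually rather than in a step. A toy sketch using the reported 400-600 nanometre height range; the pillar areal density is an assumption for illustration:

```python
import random

# Sketch: why random pillar heights smooth the air-to-wing transition.
# We average the material fill fraction at each depth over pillars with
# random heights in the 400-600 nm range reported for the glasswing.
# The base areal fill fraction (0.5) is an illustrative assumption.

random.seed(1)
heights = [random.uniform(400.0, 600.0) for _ in range(10_000)]  # nm
base_fill = 0.5  # assumed fraction of area covered by pillars at their base

for depth in range(0, 601, 100):  # depth below the tallest pillar tips, in nm
    # A pillar contributes material at this depth only if it is tall enough.
    frac_present = sum(h >= 600 - depth for h in heights) / len(heights)
    fill = base_fill * frac_present
    print(f"depth {depth:3d} nm: fill fraction ~ {fill:.2f}")
```

With identical pillar heights the fill fraction would jump abruptly at one depth; the randomness spreads that jump over roughly 200 nanometres, which is the graded-index effect that suppresses reflection across a wide range of angles.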

The findings open up a range of applications wherever low-reflection surfaces are needed, for lenses or displays of mobile phones, for instance. Apart from theoretical studies of the phenomenon, the infrastructure of the Institute of Microstructure Technology also allows for practical implementation. First application tests are in the conception phase at the moment. Prototype experiments, however, already revealed that this type of surface coating also has a water-repellent and self-cleaning effect.

Dr. Andrew Maynard’s May 20, 2014 article (Small Packages; A new case study on the health risks of nanotech doesn’t tell the whole story) for Slate magazine does much to calm any fears there might be in the wake of a recent case study about the consequences of handling nickel nanoparticles in the workplace,

… The report describes a chemist who developed symptoms that included throat irritation, nasal congestion, facial flushing, and skin reactions to jewelry containing nickel, after starting to work with a powder consisting of nanometer-sized nickel particles. According to the report’s lead author, this is “case one in our modern economy” of exposure to a product of nanotechnology leading to an individual becoming ill.

…

… And this is why the case of the nickel nanoparticles above needs to be approached with some caution. Many people have an allergic skin reaction to nickel, and research has shown that inhaling nickel particles can cause people to become sensitized to the metal. It’s also well known that fine powders will become airborne more easily than coarse ones when they’re handled, and that the finer the powder you inhale, the more potent it is in your lungs. So it shouldn’t come as a surprise that handling nickel nanopowder in an open lab without exposure controls is not a great idea. In other words, the reported incident was more a case of bad exposure management than nanoparticle risk.

That said, the case does highlight the level of respect with which any new or unusual material should be treated. …

Reinforcing Andrew’s comments about nickel sensitivities, there’s a recent report about smartphones and metal sensitivities. From a May 21, 2014 article by Sarah Knapton for The Telegraph (UK), Note: A link has been removed,

If you have ever noticed swelling, redness, itching or blistering near your cheekbones, ears, jaw or hands, you may be allergic to your phone.

A new study suggests the nickel, chromium and cobalt found in common phones made by BlackBerry, Samsung and LG among others, can cause skin irritations.

Danish and US researchers found at least 37 incidents since 2000 where contact dermatitis was caused by mobile phones.

Here are links to and citations for the nickel case study and to the smartphone paper,

The nickel paper is behind a paywall and the smartphone paper is open access.

One comment: the smartphone literature search yielded a small sample; on the other hand, if there isn’t a category for the problem, it might not get into reports and be studied.

Getting back to Andrew’s article, it is illuminating and frustratingly opaque (perhaps there was an editing issue?),

Over a couple of days in London last summer, I found myself mulling over a very similar question with a small group of colleagues. We were a pretty eclectic group—engineers, designers, toxicologists, business leaders, academics, policy wonks—but we had one thing in common: We wanted to get a better handle on how dangerous realistic products of nanotechnology might be, and how these dangers might be avoided.

… Our approach was to imagine products based on engineered nanomaterials that were technologically feasible and would also have a reasonable chance of surviving a cut-throat economy—products like active food packaging labels that indicated the presence of contaminants; helium-filled balloons with solar cell skins; and materials templated from viruses to generate hydrogen and oxygen from water. We then tried to imagine how these plausible products could potentially release dangerous materials into the environment.

To our surprise, we struggled to come up with scenarios that scared us.

It sounds like this session was organized as a think tank. It would have been nice to know who organized it, who were their invitees, and what was their expertise. On that note, there is this about Andrew at the end of the Slate article,

Andrew Maynard is a leading expert on the responsible development and use of emerging technologies and is the director of the U-M [University of Michigan] Risk Science Center.

Having stumbled across Andrew many times over the years within the ‘nano blogosphere’ and having him kindly answer my amateurish questions about reading research, I feel confident when reading his opinion pieces that he is well informed and has carefully considered not only questions I might ask but others as well.

While I might like to know more about that 2013 think tank session in London (UK), this section towards the end of the piece suggests that Andrew has not, in an excess of enthusiasm, thrown in his lot with some hype-happy group,

… the case [nickel inhalation] does highlight the level of respect with which any new or unusual material should be treated. This was also one of the conclusions from those two days in London. Just because the risks of many nanotechnology products seem relatively small, doesn’t mean that we can afford to be complacent. There’s still the possibility that someone will create a particularly dangerous new material, or will use a material that seems safe in a dangerous way. As a society we need to be vigilant when it comes to advanced materials, whether they are branded with the nano insignia or not.

As for the Knapton article and the smartphone research, I haven’t come to any particular conclusions but I am going to keep an eye out for evidence, anecdotal or otherwise. A friend of mine, who sometimes suffers from skin sensitivities, just switched over to her first BlackBerry.

All that’s needed is an oven, a microscope glass slide and a common, gel-like silicone polymer called polydimethylsiloxane (PDMS). First, drop a small amount of PDMS onto the slide. Then bake it at 70 degrees Celsius to harden it, creating a base. Then, drop another dollop of PDMS onto the base and flip the slide over. Gravity pulls the new droplet down into a parabolic shape. Bake the droplet again to solidify the lens. More drops can then be added to hone the shape of the lens, which also greatly increases its imaging quality. “It’s a low cost and easy lens-making recipe,” Lee [Steve Lee from the Research School of Engineering at Australian National University] says.
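To get a feel for the optics behind the recipe, a cured droplet can be treated as a thin plano-convex lens: a spherical-cap radius of curvature R gives a focal length f = R/(n − 1), and a simple-magnifier power of roughly 250 mm/f. The droplet geometry and the PDMS index (~1.41) below are assumptions for illustration, not the paper’s measured values:

```python
# Sketch: rough optics of a hanging-droplet plano-convex lens, assuming a
# spherical-cap shape. Geometry and PDMS index (~1.41) are illustrative
# assumptions; the ANU lenses were tuned empirically by adding droplets.

def cap_radius_of_curvature(base_radius_mm, sag_mm):
    """Radius of curvature of a spherical cap from base radius and height."""
    return (base_radius_mm**2 + sag_mm**2) / (2.0 * sag_mm)

def focal_length_mm(radius_mm, n=1.41):
    """Thin plano-convex lens: f = R / (n - 1)."""
    return radius_mm / (n - 1.0)

def magnification(f_mm, near_point_mm=250.0):
    """Simple-magnifier power relative to the standard 250 mm near point."""
    return near_point_mm / f_mm

R = cap_radius_of_curvature(base_radius_mm=1.0, sag_mm=0.8)  # a steep droplet
f = focal_length_mm(R)
print(f"R ~ {R:.2f} mm, f ~ {f:.2f} mm, magnification ~ {magnification(f):.0f}x")
```

A millimetre-scale droplet lands in the 100x range, the right order of magnitude for the 160x reported; adding droplets to steepen the curvature pushes the power higher, which is exactly what the successive-drop step does.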

I’m still marveling over this image,

Caption: This photo shows a single droplet lens suspended on a fingertip. Credit: Stuart Hay. Courtesy: The Optical Society

For anyone who doesn’t know much about producing lenses and why these baked droplets could improve lives, the Optical Society news release provides some insight,

A droplet of clear liquid can bend light, acting as a lens. Now, by exploiting this well-known phenomenon, researchers have developed a new process to create inexpensive high quality lenses that will cost less than a penny apiece.

Because they’re so inexpensive, the lenses can be used in a variety of applications, including tools to detect diseases in the field, scientific research in the lab and optical lenses and microscopes for education in classrooms.

“What I’m really excited about is that it opens up lens fabrication technology,” says Steve Lee from the Research School of Engineering at Australian National University (ANU) …

…

Many conventional lenses are made the same way lenses have been made since the days of Isaac Newton—by grinding and polishing a flat disk of glass into a particular curved shape. Others are made with more modern methods, such as pouring gel-like materials into molds. But both approaches can be expensive and complex, Lee says. With the new method, the researchers harvest solid lenses of varying focal lengths by hanging and curing droplets of a gel-like material—a simple and inexpensive approach that avoids costly or complicated machinery.

“What I did was to systematically fine-tune the curvature that’s formed by a simple droplet with the help of gravity, and without any molds,” he explains.

Although people have long recognized that a droplet can act as a lens, no one tried to see how good a lens it could be. Now, the team has developed a process that pushes this concept to its limits, Lee says.

…

The researchers made lenses a few millimeters thick with a magnification power of 160 times and a resolution of about 4 microns (millionths of a meter)—two times lower in optical resolution than many commercial microscopes, but more than three orders of magnitude lower in cost. “We’re quite surprised at the magnification enhancement using such a simple process,” he notes.

“We put a droplet of polymer onto a microscope cover slip and then invert it. Then we let gravity do the work, to pull it into the perfect curvature,” Dr Lee said.

“By successively adding small amounts of fluid to the droplet, we discovered that we can reach a magnifying power of up to 160 times with an imaging resolution of four micrometers.”

The polymer, polydimethylsiloxane (PDMS), is the same as that used for contact lenses, and it won’t break or scratch.

“It would be perfect for the third world. All you need is a fine tipped tool, a cover slip, some polymer and an oven,” Dr Lee said.

The first droplet lens was made by accident. [emphasis mine]

“I nearly threw them away. [emphasis mine] I happened to mention them to my colleague Tri Phan, and he got very excited,” Dr Lee said.

“So then I decided to try to find the optimum shape, to see how far I could go. When I saw the first images of yeast cells I was like, ‘Wow!'”

Dr Lee and his team worked with Dr Phan to design a lightweight 3D-printable frame to hold the lens, along with a couple of miniature LED lights for illumination, and a coin battery.

The technology taps into the current citizen science revolution [emphasis mine], which is rapidly transforming owners of smart phones into potential scientists. There are also exciting possibilities for remote medical diagnosis.

Dr Phan said the tiny microscope has a wide range of potential uses, particularly if coupled with the right smartphone apps.

“This is a whole new era of miniaturisation and portability – image analysis software could instantly transform most smartphones into sophisticated mobile laboratories,” Dr Phan said.

“I am most able to see the potential for this device in the practice of medicine, although I am sure specialists in other fields will immediately see its value for them.”

Dr Lee said the low-cost lens had already attracted interest from a German group interested in using disposable lenses for tele-dermatology.

“There are also possibilities for farmers,” he said. “They can photograph fungus or insects on their crops, upload the pictures to the internet where a specialist can identify if they are a problem or not.”

That Lee created his first droplet by accident and almost threw it away echoes many, many other science stories. In addition to that age old science story, I love the simplicity of the idea, the reference to Isaac Newton, and the inclusion of citizen science.