21st Century Tech Blog
http://www.21stcentech.com
Science, Technology and the Future

How Canadians Can Lower Their Carbon Footprint and Show The World How It Can Be Done: Part 1: The Carbon Problem
http://www.21stcentech.com/canadians-carbon-footprint-show-world/

March 19, 2018 – This is the first in a series of postings that will look at the global challenge of climate change and at how Canada and individual Canadians can help find solutions to regulate the carbon balance on our planet. In this first article, I describe the challenge life on this planet faces in the coming centuries if we don’t re-establish carbon homeostasis, the balance that keeps us in Goldilocks conditions (not too hot, not too cold, just right) on most of Earth’s land masses.

In future postings, we’ll cover what forms of action are needed, where they are needed, and how they can be carried out to restore carbon balance. We will look at the tools, the policies, and the missing elements still required to ensure carbon remains in balance so that life on this planet remains viable.

A reminder of just how out of whack things could become sits roughly 40 million kilometers from our planet at its closest approach. It is our nearest Solar System neighbour, Venus. Scientists believe that at one time Venus, Earth’s near-twin in size, had oceans and an atmosphere capable of supporting the emergence of life. But a runaway greenhouse effect turned Venus into a hellish environment. Venusian conditions are not likely to happen here, but the lesson of what a carbon imbalance can ultimately do to a planet should not be underestimated.

The Carbon Problem Explained

All life on this planet is carbon-based. In fact, by weight, we are roughly 20% carbon. There is carbon in the Earth’s crust as well. It’s not much, a mere 0.032% by weight for the lithosphere and outer mantle. That carbon likely came from the surface and got trapped in the rocks that make up the Earth’s crust. Calcium carbonate, commonly known as limestone, is probably the best-known example of lithospheric carbon. But some of the carbon trapped in the lithosphere is the kind we have learned to burn: coal, oil, and natural gas.

There is a finite amount of carbon here on Earth. Very little has been added since the initial formation of the planet some 4.5 billion years ago. The main exception is the occasional carbonaceous meteorite that crashes onto land or into the ocean and adds an incremental amount of new carbon. The general rule is that our planet isn’t producing more carbon. What we are doing is redistributing the carbon we already have.

So where does carbon reside on Earth?

It is found in living things, in the atmosphere, in water, and in the ground. Carbon homeostasis refers to the equilibrium established among these four different carbon environments.

Between the Devonian and Permian lie roughly 60 million years of Earth history during which environmental conditions on the surface led to a proliferation of plant life. We call this period, between 360 and 299 million years ago, the Carboniferous because of the carbon-rich deposits laid down when those plants died. No previous geologic period was as plant-rich. And from the detritus of the annual growth cycles of these many plants, we have the fossil fuel deposits that humans discovered some 300 million years later, recognizing the energy potential held within them.

The Carboniferous Period in Earth history produced an abundance of plant life that laid down the type of carbon deposits depicted here. This sequestered carbon, in the form of coal, oil, and natural gas, is now being released into Earth’s atmosphere, upsetting the distribution balance of carbon and causing global warming.

Carbon locked underground in coal, oil, or natural gas has little to no impact on the atmosphere. But when that carbon is harvested, the balance is disrupted: a deficit in one carbon domain leads to an increase in another. Burning fossil fuels depletes the underground store and transfers the carbon to the air, or into materials humans produce that lock in carbon (concrete, for example).
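To make that bookkeeping concrete, here is a minimal sketch in Python. The reservoir names follow the four domains described above, but the numbers are placeholders invented purely for illustration, not real carbon inventories:

```python
# Illustrative only: reservoir sizes are placeholders, not real inventories.
reservoirs = {
    "underground (fossil fuels)": 1000.0,  # arbitrary units of carbon
    "atmosphere": 600.0,
    "oceans and fresh water": 900.0,
    "living things and soil": 500.0,
}

def burn_fossil_fuel(amount):
    """Move carbon from the underground reservoir into the atmosphere."""
    reservoirs["underground (fossil fuels)"] -= amount
    reservoirs["atmosphere"] += amount

total_before = sum(reservoirs.values())
burn_fossil_fuel(50.0)
total_after = sum(reservoirs.values())

# The planetary total is unchanged; only the distribution shifts.
assert abs(total_before - total_after) < 1e-9
print(reservoirs)
```

The total never changes; only the distribution does. The natural sinks discussed next can move some of that transferred carbon back out of the atmosphere, but only at a limited rate.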

When we were burning only a little carbon, the Earth had natural means to compensate for the additional amounts. Natural carbon sinks such as forests, grasslands, soil, and water absorbed the excess atmospheric carbon, keeping the build-up of the element to levels that had little impact on climate. But the pace of burning has accelerated from the late 19th century to today, when we consume billions of barrels of oil, trillions of cubic meters of natural gas, and billions of tons of coal annually. The natural sinks that helped to rebalance the carbon can no longer keep up.

This is the problem in a nutshell: quantities of carbon formerly sequestered underground have been redistributed into our atmosphere. We have been the beneficiaries of carbon’s largesse. Burning it has helped us build industry. Burning it has produced the fertilizers that increase crop production and feed Earth’s growing population. Burning it has produced wealth and improved the quality of life for much of humanity. But burning it has also led to unintended consequences, namely global warming.

Global warming isn’t a fiction despite climate change skeptics’ claims. The evidence of its existence is overwhelming. Both day and nighttime temperatures are rising. The warming atmosphere is more volatile. Storms are more intense. Polar sea ice, continental ice sheets, and alpine glaciers are melting. Variability in atmospheric conditions is becoming more extreme. Places that experienced periods of drought in the past are seeing the duration of dry periods increase. Rain is becoming more unpredictable, and often more intense.

In the past, the animals and plants of Earth found ways to deal with the planet’s natural cycles over thousands of years. Evolution has been their adjusting mechanism. But evolution doesn’t work when catastrophic events occur, as the evidence of mass extinctions in Earth’s history shows. Those mass extinctions, scientists theorize, happened because of sudden changes to the surface and atmospheric equilibrium of the planet caused by meteor strikes or massive volcanic eruptions.

Now it is human activity creating a slower version of a similar catastrophe, not instantaneous but slow-moving, as each decade the atmosphere warms a little more in step with the rising carbon imbalance. The analogy often used is that of a frog sitting in a pot of water being warmed on a stovetop. At some point the frog either jumps out or dies. Here on Earth, however, there is no place for the frog to jump to. We, the humans, animals, and plants, are the frog in the pot. This is our problem. We created it, and we need to find solutions that do not impoverish us or further compromise the lives of the remaining animals and plants of the planet.

Carbon in the atmosphere is bound to oxygen, forming carbon dioxide (CO2), the greenhouse gas that human activity has caused to rise from about 280 parts per million (ppm) in the mid-19th century to close to 410 ppm today.
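To put those two numbers in perspective, here is a quick back-of-envelope calculation, a sketch only, using the figures above plus a rough assumed span of about 170 years for "mid-19th century to today":

```python
# Relative increase in atmospheric CO2 concentration from the figures quoted above.
pre_industrial_ppm = 280.0   # mid-19th century (from the article)
today_ppm = 410.0            # circa 2018 (from the article)
years = 170                  # rough span; an assumption, not from the article

increase = today_ppm - pre_industrial_ppm
percent_increase = 100.0 * increase / pre_industrial_ppm
average_ppm_per_year = increase / years

print(f"Increase: {increase:.0f} ppm ({percent_increase:.0f}% above the pre-industrial level)")
print(f"Average rise: {average_ppm_per_year:.2f} ppm per year over ~{years} years")
# Roughly a 46% increase, averaging ~0.8 ppm/year; recent decades run closer to 2 ppm/year.
```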

New XPrize Offers $10 Million to Develop a Real-Life Avatar
http://www.21stcentech.com/xprize-offers-10-million-develop-real-life-avatar/

March 17, 2018 – The ANA Avatar XPrize is the latest competition announced by the XPrize organization. It joins a number of other prize offerings covering a wide range of fields, from empowering children to take control of their own learning, to turning carbon dioxide emissions into valuable products, to creating water from thin air. That makes 10 active XPrizes with over $100 million U.S. at stake, each sponsored by a corporation. The ANA in the Avatar prize stands for All Nippon Airways. Registration to compete opened on March 12th and will remain open until October 31st. Registrants will have four years to develop the technology described on the website.

So what is expected of the winner?

To create a multipurpose robotic avatar presence for use in a variety of real-world applications. These avatars could take many different forms to suit various scenarios, including:

providing care – the avatar robot would be present to care for anyone regardless of distance.

disaster relief – the avatar robot would provide life-saving skills to remote areas where it may be too dangerous for a human to go.

multipurpose utility – the avatar robot would provide critical maintenance repairs where no human with the right skills was available.

States Peter Diamandis, the creator of the XPrize, “The ANA Avatar XPrize can enable creation of an audacious alternative [to humans] that could bypass these limitations allowing us to more rapidly and efficiently distribute skill and hands-on expertise to distant geographic locations where they are needed, bridging the gap between distance, time and cultures.”

The guidelines state that the winning team will use state-of-the-art technologies to allow a robot, working with an untrained operator, to complete a series of tasks in a location at least 100 kilometers (62 miles) away. The execution of real-world tasks will effectively demonstrate an ability to deliver critical care, emergency response, and other skills needed in health, the environment, food, education, housing, safety, and security.

Registrants will pay a $1,000 fee before October 31, 2018. A three-month qualifying round follows for teams to submit technical documentation and plans. Up to 25 teams will then be given the opportunity to move on to the main competition; these will be announced on April 30, 2019. The competition that follows will last 30 months with two milestones, one in April 2020 and one in April 2021. The two milestone prizes will be worth $1 million each.

Three finalists will be chosen from the competition, with each team asked to complete three series of realistic and complex activities, each over a 20-minute period. For each activity, the finalists will be awarded points. The one with the most points will win the grand prize worth $8 million.
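For the arithmetic-minded, a trivial sketch (using only the figures stated in this post) confirms how the headline purse adds up:

```python
# Prize purse breakdown, using the figures quoted in this post.
milestone_prizes = [1_000_000, 1_000_000]  # April 2020 and April 2021 milestones
grand_prize = 8_000_000                    # awarded to the finalist with the most points

total_purse = sum(milestone_prizes) + grand_prize
print(f"Total purse: ${total_purse:,}")    # $10,000,000, matching the headline figure
```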

Judging will look at a number of criteria:

ability to perform in less than 10 minutes after being trained.

low weight of less than 5 kilograms (11 pounds) for equipment worn by the remote avatar operator, with no limitation on the weight of the avatar robot or accessory equipment.

In the providing care scenario, the avatar will:

identify a board game, bring it to a patient, and set up the game to play

listen to an announcement about a visiting doctor

bring the patient to the doctor’s location

read the patient report aloud and sign it

bring the patient back to his or her original location

take a blanket from the wheelchair, fold it, and put it on a shelf

In the Disaster Relief scenario, worth 200 points, the avatar will:

operate a shovel to load 20 kilograms (44 pounds) of debris into a wheelbarrow

push the wheelbarrow 10 meters (33 feet) to a loading area

use the shovel to unload the wheelbarrow

return to the original location

listen for a call for help and locate the source

walk forward to the location

pick up a coiled rope with a weighted end and then throw the rope towards the location of the sound

turn the avatar’s head and call for assistance

hand the unweighted end of the rope to an assistant

In the Multipurpose Utility scenario, worth 300 points, the avatar will:

locate a set of instructions and read them aloud

pour a specified amount of liquid from one container to another

use a scoop to collect a specified amount of powder

unroll a plan on a table and weigh down the corners

use a protractor, straightedge, and mechanical pencil to draw on the plan two lines that intersect at 45 degrees

identify a broken plug-in component on an electrical control panel

walk 6 meters (20 feet) to a workbench and solder a wire on it

return to the control panel and replace the broken component

Throw Weight of Individual Space Program Providers from 35 to Plus 100,000 Kilograms
http://www.21stcentech.com/throw-weight-individual-space-program-providers/

March 16, 2018 – The Falcon Heavy, SpaceX’s mega-booster rocket, is currently the heavyweight king in payload capacity, a measure of the ability to deliver satellites and crewed command modules into Near and Deep Space. Near Space is defined as low-Earth orbit (LEO). An intermediate region, where many weather and telecommunications satellites are placed, is one where orbits can be matched to Earth’s rotation. We describe satellites placed in this area of space as operating geosynchronously; that is, their orbital velocity keeps them stationary over the same area of the planet. Getting payloads to geosynchronous orbits requires rockets to have geostationary transfer orbit (GTO) capacity. Then there is trans-lunar injection (TLI), the first of two Deep Space measures assigned to a rocket’s payload capacity. Further Deep Space capacity, such as a flight to Mars, is defined by Mars orbital insertion (MOI). These are the parameters by which we measure past, current, and future launch providers. Surprisingly, although the future Space Launch System is touted by NASA as having greater payload capacity than its predecessor, the Saturn V, current specification data does not support that claim.

The list that follows includes past, current, and future launch systems in order of descending payload capacity. I may not have found all of them, but the list does give you a good idea of our planet’s past and current capacity to put things into orbit or Deep Space. Of the twenty-seven listed here, four are retired and three are reusable. Not included are Stratolaunch Systems and Virgin Galactic, two companies in the process of completing air-launch systems capable of putting payloads into LEO. (A short sketch after the list entries below compares the top LEO figures.)

BFR (SpaceX, United States) – in development – reusable – testing of systems to begin in 2019 based on Elon Musk’s latest predictions – 150,000 kilograms (330,000 pounds) to LEO, no data for GTO, 50,000 kilograms (110,000 pounds) for Deep Space missions including MOI and return to Earth. This is a rocket designed to take humans to Mars and back.

Saturn V (NASA, United States) – retired – expendable – 140,000 kilograms (310,000 pounds) to LEO, no data for GTO because the rocket was never used for this purpose, and 48,600 kilograms (107,100 pounds) to trans-lunar injection (TLI) for Apollo missions from 1969 to 1972.

Space Launch System (SLS) (Boeing, United Launch Alliance, Orbital ATK, Aerojet Rocketdyne for NASA, United States) – in development – largely expendable with the exception of booster rockets – first launches in different configurations between 2019 and 2022 – 130,000 kilograms (286,000 pounds) to LEO in the Block 2 configuration, no data for GTO, and 45,000 kilograms (99,000 pounds) to TLI.
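As a quick comparison of the three heaviest lifters above, here is a minimal sketch that ranks the LEO payload figures quoted in this list (the figures themselves come from the entries above; the list remains the reference):

```python
# LEO payload capacities (kilograms) as quoted in the list above.
leo_payload_kg = {
    "BFR (SpaceX, in development)": 150_000,
    "Saturn V (NASA, retired)": 140_000,
    "SLS Block 2 (NASA, in development)": 130_000,
}

# Rank the vehicles by payload to low-Earth orbit, heaviest first.
for name, kg in sorted(leo_payload_kg.items(), key=lambda item: item[1], reverse=True):
    print(f"{name}: {kg:,} kg ({kg * 2.20462:,.0f} lb) to LEO")
```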

Too Bad We Didn’t Build the Technology to Upload Stephen Hawking’s Mind Before He Passed
http://www.21stcentech.com/bad-build-technology-upload-stephen-hawkings-mind-passed/

March 14, 2018 – There are numerous obituaries and articles today on the Internet, print, and broadcast media about Stephen Hawking, the world’s most renowned theoretical physicist, who passed away early today. I remember reading “A Brief History of Time” in 1988 and marveling at how a man physically locked in a body wracked by amyotrophic lateral sclerosis (ALS) could be so visionary, seeing the cosmos and describing it in ways never stated before. Hawking was 76, an unusually long life for someone suffering from ALS. Maybe that was because of mind over matter as he continued to study and write about black holes, dark matter, relativity, the Big Bang, what came before the Big Bang, and the nature of space-time in governing our Universe. Hawking was continually working on developing a grand unifying theory to explain existence itself. Like his predecessor, Albert Einstein, he fell short of that goal. But the accumulation of his life’s work expanded our understanding of where we all came from, star stuff.

In the last couple of years of his life, Hawking warned humanity that we were on a path to extinction; that if we didn’t start planning a getaway to other worlds, we would soon consume Earth in its entirety leaving us with no home. He also feared the rise of unfettered artificial intelligence stating that thinking machines in the near future would hasten our extinction.

For Hawking, whose body began failing him in his twenties, one could say he was continually in search of an out-of-world, out-of-body experience. It’s just too bad we hadn’t yet come up with the technology and science to upload all that incredible Hawking knowledge into a computer to continue his work.

In the past, I have written about entrepreneurs putting money into, and researchers pursuing, the science and technology needed to upload a human brain. A project called the 2045 Initiative, with a membership of over 20,000 and the backing of billionaires, has been working on the development of both neural interfaces and robotics to create a human-machine: all the knowledge contained within our brains, our consciousness, hosted in a non-biological carrier.

When the 2045 Initiative met in New York City in July of 2013 they set future milestones:

2020 – the creation of avatars controlled by a brain-computer interface

2025 – development of life support systems to keep a human brain alive while no longer in a body

2035 – the complete modeling of human consciousness and the transfer of it to an artificial carrier

2045 – the evolution of a new species, a human-machine, immortal, and capable of leaving Earth to find other worlds to explore

I’m not so sure Hawking would have wanted to be the first to try out the brain-computer interface and experience the world through an avatar. His death today came up two years short of the timetable described above.

We will miss the mind and the courage of this man.

Microfinancing Meets the Challenges of Global Climate Change and Extreme Weather Events
http://www.21stcentech.com/microfinancing-meets-challenges-global-climate-change-extreme-weather-events/

March 13, 2018 – I have been a member of Kiva.org since shortly after it was founded. Kiva was started by an American couple to provide a capital pool for funding entrepreneurs and others in need of capital in Developing World countries. Today it even provides loans to Americans who need small amounts to fund projects and cannot access capital by other means. In the years since I became a member, I have lent over $1,600 U.S. in 64 loans spread over 28 countries. These loans have helped entrepreneurs set up or restock general merchandise stores, buy seed and livestock, start a small taxi service, and support small collectives in rural villages across Africa, Asia, and Central and South America. 75% of my loans have gone to women, the fastest-growing entrepreneurial group on the planet. None of my loans of $25 each, pooled with others making similarly sized investments, has ever addressed overcoming a climate change or extreme weather issue. But now there are organizations using the micro-loan model to do just that.

Microfinancing clearly works when dealing with families, groups and individuals motivated to build a business or improve themselves with the assistance of repayable loans. But can it work to pull people back from the brink of a major natural disaster such as a hurricane, or to help implement adaptation or mitigation strategies to combat the impact of climate change?

VisionFund, a microloan charity, currently lists on its website over 1.25 million borrowers, with an average loan of $552 U.S., spread across more than 30 countries in Africa, Asia, Latin America, the Middle East, and Eastern Europe, and a portfolio totaling more than $621 million. The organization focuses on the poorest communities where its loans can make the most significant impact, places where people live on less than $2 per day.

VisionFund’s latest foray is into climate insurance with the launch of a program for Africa and Asia. The target is to benefit more than 690,000 families, some four million people living in six low-income countries, by providing them with climate insurance. They call the program the African and Asian Resilience in Disaster Insurance Scheme (ARDIS). ARDIS gives policyholders swift access to credit in the event of a climate shock. Microloans are managed by local financing partners and are disbursed quickly to limit disruption after a climate-related calamity. The presence of this type of quick capital makes all the difference to small farms and businesses hit by natural disaster disruptions. In these events, normal loan providers like local banks are usually slow to respond, and often so are local and state governments.

Michael Mithika, President and CEO of VisionFund International, states: “ARDIS uses an innovative financing structure making recovery lending scalable. This scalability means greater opportunities for more people to access emergency finance to restart businesses and restore incomes. We’ve already seen the benefits of recovery lending initiatives in sub-Saharan Africa in 2016/2017.”

Microfinancing, in the age of climate change, could make an enormous difference to the most vulnerable populations on the planet. It is good to see large institutions getting into this area of investment rather than just writing cheques and claiming charitable tax deductions in the aftermath of disasters. It also brings global capital markets front and centre into two important realms: microloans and insurance risk coverage. The initiative has set a target of meeting 1% of the G7 goal to insure vulnerable people in Developing World countries, giving them access to instant capital when climate change creates the need.

Flooding like this, which occurred in August of 2017, is chronic today across South Asia. Local officials continue to ignore the dangers, pursuing development plans that increase flood risk and flood-related deaths as climate patterns produce more destructive weather events. (Image credit: Anupam Nath, Associated Press)

Exceeding the Speed of Light Possible in a Quantum Experiment
http://www.21stcentech.com/exceeding-speed-light-quantum-experiment/

March 13, 2018 – Since 2000 we have known that pulses of light can, under the right conditions, appear to travel faster than the speed of light in a vacuum. Experiments back then, at Princeton’s NEC Laboratory, used lasers to produce faster-than-light-speed pulses by passing a beam through a specially constructed chamber containing cesium gas. A 3-microsecond pulse of light, which normally would take 0.2 nanoseconds to make it from one end of the chamber to the other, emerged 62 nanoseconds earlier than if it had passed through a vacuum. The phenomenon observed was called anomalous dispersion and was attributed to the effect of the cesium gas within the chamber. What it showed is that a light pulse can appear to move faster than the supposed speed limit of approximately 300,000 kilometers (186,000 miles) per second.
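Taking those figures at face value, a quick back-of-envelope calculation (a sketch only; the chamber length is inferred from the quoted 0.2-nanosecond vacuum transit time) shows just how strange the reported pulse advance is:

```python
# Back-of-envelope reading of the NEC/Princeton figures quoted above.
c = 299_792_458.0            # speed of light in vacuum, m/s

vacuum_transit_s = 0.2e-9    # time to cross the chamber at c (from the article)
advance_s = 62e-9            # how much earlier the pulse emerged (from the article)

chamber_length_m = c * vacuum_transit_s             # ~0.06 m, i.e. about 6 cm
effective_transit_s = vacuum_transit_s - advance_s  # negative: the peak exits "before" it enters

print(f"Inferred chamber length: {chamber_length_m * 100:.1f} cm")
print(f"Effective transit time: {effective_transit_s * 1e9:.1f} ns")
# A negative effective transit time is the signature of anomalous dispersion:
# the pulse peak is reshaped and advanced, but no information outruns c.
```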

At the time of the release of the Princeton lab findings, the head researcher for the experiment, Dr. Lijun Wang, stated: “Our experiment shows that the generally held misconception that nothing can move faster than the speed of light is wrong. Einstein’s Theory of Relativity still stands, however, because it is still correct to say that information cannot be transmitted faster than the vacuum speed of light.”

But hold on. Enter quantum physics. In experimental results published on February 8, 2018, in the journal Physical Review Letters, in an article entitled “Two-Way Communication with a Single Quantum Particle,” two quantum physicists from the University of Vienna, Flavio Del Santo and Borivoje Dakić, demonstrate that quantum systems can surpass the speed limit of light.

Their experiment involves the exchange of a single quantum particle (a photon) between two parties at the same time, with both receiving results in half the time a conventional exchange of messages traveling at the speed of light would take.

Why does this appear to double the speed of light? Because of quantum superposition. The single photon is prepared in a superposition of traveling in both directions at once, so, in effect, it ends up in two places simultaneously.

Is there a practical application for this discovery? Theoretically, this technique could be used to send and receive secure communications where no third party could eavesdrop on the messages exchanged.

If you are not familiar with superposition, it is a strange phenomenon at the quantum scale. We can observe a classical analogue of it when we throw two pebbles into a pond and watch the waves they create spread in rings until they intersect, reinforcing each other in some places and canceling each other out in others.

The analogy of waves on a pond carries over to quantum states. A quantum particle can be described as a wave, a spread-out series of ripples, rather than a point sitting in a single location. Those ripples move outward and may overlap with other ripples, reinforcing or canceling one another. The particle still exists, but it behaves as both particle and wave, able to exist in a combination of different states simultaneously.

Some argue that in the experiment described above the quantum particle never really moves, because it already exists in two places or two states at once. Yet we know that in this experiment individual particles were both fired and received.

Of course, this brings up the idea of humanity at some point traveling faster than light, bringing unreachable stars closer to us and making the world of Star Trek no longer science fiction. We have a long way to go before anyone can turn a single photon appearing to outrun light into a starship doing the same.

This artist’s impression of a spaceship jumping to light speed and beyond is a convention used by science fiction writers to allow us to travel to the distant stars. Reality may be far different (Image credit: NASA/Glenn Research Center)

Materials Engineering Revolution on the Horizon States Peter Diamandis
http://www.21stcentech.com/peter-diamandis-talks-materials-revolution/

March 12, 2018 – Materials science is a field of both engineering and science that seeks new material discoveries and new ways to use existing materials. In his latest e-mail blast, Peter Diamandis, of XPrize fame, describes how the field is converging with other areas of science and engineering in unexpected and exciting ways. Efforts like the Materials Genome Initiative apply machine learning to the subject, and new fabrication techniques allow for precision material builds, atom by atom, creating out of old materials a whole new generation of applications. In this posting I have taken Diamandis’ words and added a few of my own to describe the driving forces within materials science today and tomorrow. Enjoy the read.

Over 70 years ago, John Bardeen, Walter Brattain, and William Shockley sparked the semiconductor revolution with the fabrication of the first transistor. This fundamental technology now powers every aspect of science, innovation, and society. Projections put a $500 billion revenue tag on the semiconductor industry in 2018.

Nearly all products today rely on semiconductor materials science and its mightiest outcome, the transistor, which plays a decisive part in global computation, artificial intelligence (AI), and big data. The Economist magazine predicts that AI-related products and innovations will add nearly $16 trillion to GDP by 2030.

“Data to this century is what electricity was in….previous generations,” states Omkaram Nalamasu, Senior Vice President and Chief Technology Officer at Applied Materials. “Today, [we generate] something like 230 million tweets per day, 300 billion emails are sent per day, and about a hundred terabytes of data are loaded on Facebook every day. This pales in comparison to what is going to happen in the next five years. The data rate growth is about 80 percent.”

Worldwide data storage capacity is estimated to reach around 6 zettabytes by 2020, each byte powered by semiconductor materials science. Exponential network and computation technologies, including AI, the Internet of Things (IoT), the blockchain, autonomous vehicles, and the Internet itself, are expected to generate orders of magnitude more data over the next five years than our current global storage capacity. The IoT network alone, projected to consist of over 50 billion devices in 2020, will generate 600 zettabytes of information. That’s 100 times the projected storage capacity of 6 zettabytes.

Once collected, how will we process and make sense of this data? The answer: advances in computational materials. It is materials breakthroughs, both in today’s semiconductor technologies and tomorrow’s quantum computers, that will be necessary to meet growing data storage and computation needs. That means breakthrough materials to manage it all. The exponential nature of Moore’s law, therefore, is converging with breakthroughs in materials discovery and production.

Machine Learning & Materials Discovery

Traditional materials science involves costly, in-lab iteration and theory-based guesswork. But by harnessing the power of supercomputers, quantum mechanics, and importantly, machine learning, materials engineers will be able to do in hours what used to take weeks, months or years.

To contextualize this acceleration, let’s look at Thomas Edison’s invention of the light bulb. Edison required a bulb with low heat production, low power consumption, and a long-lasting, light-emitting material. With intuition and empirical data as his guide, Edison set out testing over 1,600 materials. After 14 months of testing and tinkering, he settled on a carbonized cotton thread. Fast forward 30 years to the introduction of the tungsten filament, and the market for Edison’s carbon-filament bulb was wiped out. So Edison’s 1,600 experiments and guesses produced what was ultimately a sub-optimal material that was readily replaced within three decades.

But today materials engineers can work with materials discovery software to analytically derive and virtually test optimal new materials for specified applications. “Materials engineering is the ability to detect a very small particle — something like a 10-nanometer particle on a 300-millimeter wafer,” explains Nalamasu. “That is really equivalent to finding an ant in the city of Seattle.”

Thanks to machine learning and exponential leaps in computation and quantum theory, the expensive guesswork involved in materials discovery is rapidly being phased out in favor of analytical and numerical methods. The materials scientist of tomorrow will be able to specify the desired properties of a material along with basic processing parameters, and have a machine learning program (eventually running on quantum computers) return the optimal material composition for a specific application. And materials innovation will lead to exponentially more materials innovation. Remember: supercomputers (and soon quantum computers) are themselves products of the computational materials disruption that has been building over the past 70 years.
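To make that workflow concrete, here is a minimal sketch, in Python with scikit-learn, of the kind of property-prediction loop described above. Everything in it is illustrative: the descriptors, the synthetic "measurements", and the candidate compositions are invented for the example, not drawn from the Materials Genome Initiative or any real database.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical training set: each row is a material described by three simple
# descriptors (mean atomic mass, electronegativity difference, packing density),
# and y is the property we care about (say, a band gap in eV). All synthetic.
X_train = rng.uniform(low=[10, 0.0, 0.5], high=[200, 3.5, 1.0], size=(500, 3))
y_train = (0.02 * X_train[:, 0] + 1.5 * X_train[:, 1] - 2.0 * X_train[:, 2]
           + rng.normal(scale=0.1, size=500))   # a toy structure-property relation

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Screen a batch of candidate compositions and pick the one whose predicted
# property is closest to a design target -- hours of compute instead of months
# of trial-and-error synthesis.
candidates = rng.uniform(low=[10, 0.0, 0.5], high=[200, 3.5, 1.0], size=(10_000, 3))
target_property = 2.0
predictions = model.predict(candidates)
best = candidates[np.argmin(np.abs(predictions - target_property))]
print("Best candidate descriptors:", best)
```

In real materials-informatics work the descriptors come from physics (composition, crystal structure, DFT calculations) and the training data from curated databases, but the screen-predict-rank loop is the same.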

Today’s materials scientists and engineers are creating new materials and new configurations of old ones, building structures atom by atom.

Augmented Reality/Virtual Reality and Materials Science

Think back to the state-of-the-art mobile phone of the 1980s: big, bulky, and reserved for Wall Street executives. Phones ran upwards of $4,000 and weighed over 5 kilos (about 11 pounds). These phones could only be used for calls and lacked any memory to store contact information. “If you were to build a smartphone in 1980,” Nalamasu explains, “that would cost something like $110 million. It would be about 14 meters [about 45 feet] tall, and it would require about 200 kilowatts of energy… that’s the power of materials engineering.”

Today, thanks to materials science breakthroughs in semiconductor technology, touchscreen displays, and batteries, children and executives alike can access smartphones with 6+ inch organic light-emitting diode (OLED) touchscreen displays, 256+ gigabytes of storage, integrated 1080p cameras, and endless other features. By the end of 2018, projections suggest, over 2.5 billion smartphones like those described above will be in use.

Now let’s extrapolate from the user-interface leap that materials science breakthroughs made possible in mobile devices to the current state of augmented reality (AR) and virtual reality (VR).

Today, bulky, expensive, and hard-to-use VR headsets dominate the industry. These headsets are ripe for precisely the same types of materials breakthroughs that enabled the rise of the smartphone from its ancestor, the bulky mobile phone.

Some of the materials breakthroughs anticipated to arrive and accelerate VR technology include:

OLED display advances that allow headset resolution to increase from 500 pixels per inch to 3,000+ pixels per inch

Optics to move from today’s traditional structures to thin films on the order of nanometers in thickness

Battery advances to power the large energy requirements of running high-quality graphics

The future of AR and VR lies in sleek, slim, lightweight, and beautiful headsets that seamlessly integrate with our day-to-day lives. Thanks to coming materials advances, our future devices will be cheaper, thinner, faster and more powerful.

What is the Difference Between Virtual and Augmented Reality?
http://www.21stcentech.com/difference-virtual-augmented-reality/

March 12, 2018 – A reader made me aware of a site that writes about the differences between things. It is called Difference.guru. Its editorial team likes to tackle a wide range of subjects, explaining in a concise and easy-to-understand manner the difference between things like quantum and conventional computer systems, or mechanical versus electrical engineering, to name just two. In this posting, they tackle the differences between virtual and augmented reality. For many, the two are often confused. I hope you find these explanations useful and enjoy the read. Let me know in the comments if you would like to see more postings of this type.

When it comes to computer-generated simulations, virtual reality (VR) and augmented reality (AR) resonate well with the tech-savvy market. These two types of artificial environment are gaining traction even among those not too familiar with modern technology. Some people, though, mistake one for the other and cannot clearly distinguish between the two. Let us help you understand the differences.

A summary is a good starting point.

Virtual Reality

Shows an entirely virtual world

Difficult to differentiate between what is real and what is not real

Achieved with the use of a head-mounted display, closed visors, helmet or goggles

Augmented Reality

Shows a mix of virtual world and real world

Can interact and distinguish between the virtual and real world

Achieved with the use of clear visors or a smartphone

A virtual reality ride.

VR is a totally immersive, lifelike experience of a three-dimensional environment achieved through computer-generated simulation. It provides a make-believe environment, often transporting the user somewhere else and completely changing their perceived present state. The experience also gives users the ability to manipulate objects or perform a range of actions.

An augmented reality application.

AR expands real-world environments using computer-generated perception to supplement a person’s current state of presence. AR takes a current scenario and layers enhancements on it. The digital components add to the real world experience but still let users distinguish between what is real and what is not.

VR vs AR

VR and AR are both about altering the perception of visual spaces. The main difference between them is defined by the user’s overall experience and by the technology deployed.

User Experience

VR is about providing a true-to-life virtual experience. It provides a user with an entire simulation of their present space. With VR it is difficult to distinguish between what is real and what is not real.

AR, on the other hand, takes digital information and integrates it into a user’s present environment or space. Here, one can clearly distinguish between the virtual world and the real world.

Gadgets Used

In order to achieve the VR experience, users view displays through closed visors, helmets, or goggles. These devices cut the user off from outside stimuli, particularly visual and auditory cues, creating the full effect of an imaginary reality and the feeling of experiencing a simulated world firsthand.

AR, on the other hand, uses clear visors or smartphones and downloaded apps that take advantage of the phone’s onboard global positioning system (GPS). The GPS determines the user’s current location and enables the AR app to interact with him or her, feeding digital components onto the screen or visor display to enhance what is being viewed in the real environment.
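As a rough illustration of that location-based overlay step, here is a minimal, self-contained sketch in Python. The points of interest, coordinates, and 50-meter radius are invented for the example; a real AR app would use a framework such as ARKit or ARCore and live map data rather than a hard-coded list.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

# Hypothetical points of interest the app could label in the camera view.
points_of_interest = [
    {"name": "Coffee shop", "lat": 43.6455, "lon": -79.3807},
    {"name": "Transit stop", "lat": 43.6452, "lon": -79.3790},
    {"name": "Museum entrance", "lat": 43.6500, "lon": -79.3900},
]

def overlay_labels(user_lat, user_lon, radius_m=50):
    """Return the labels an AR view would draw for nearby points of interest."""
    return [
        f'{poi["name"]} ({haversine_m(user_lat, user_lon, poi["lat"], poi["lon"]):.0f} m)'
        for poi in points_of_interest
        if haversine_m(user_lat, user_lon, poi["lat"], poi["lon"]) <= radius_m
    ]

# Simulated GPS fix for the user; a real app would read this from the device.
print(overlay_labels(43.6454, -79.3806))
```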

Can Third-hand Smoke Increase Cancer Risk?
http://www.21stcentech.com/third-hand-smoke-increase-cancer-risk/

March 11, 2018 – We know that there is a direct link between smoking cigarettes and lung cancer. We also know that second-hand smoke can cause cancer. But what the heck is third-hand smoke (THS)? And why should we be concerned about its potential cancer risk?

Researchers at the Lawrence Berkeley National Laboratory in California describe THS as the “toxic residues that linger on indoor surfaces and in dust long after a cigarette has been extinguished.”

Exposure to THS was previously not well understood. But the work at the Berkeley Lab shows that indoor surfaces can produce emissions containing smoke residue that combine with other pollutants to form hazardous compounds. Dust exposed to THS can also go airborne, where it can be inhaled into the lungs. And unlike second-hand smoke, THS pollution can last for long periods of time.

Nicotine released during smoking adds to the lethal mix that gets deposited on room surfaces. Absorption isn’t limited to inhalation: the residue can also be absorbed through the skin by non-smokers contacting these surfaces.

The studies at Berkeley Lab were done with mice. Although THS exposure didn’t lead to the development of cancerous tumors in adult test animals, infant mice exposed to it showed an increased incidence of lung cancer by 40 weeks of age. For adult mice exposed to THS over a prolonged period, exposure correlated with visible damage to internal organs, weight loss, delayed healing of skin wounds, and changes to male reproductive cells.

The concern raised by the Berkeley Lab study relates to non-smokers, particularly children, who are exposed to greater health risk in indoor environments where current or past smokers live. The study suggests even low doses of THS represent a long-term health hazard. This may also explain why homes where parents smoke only outside still show higher levels of nicotine and other smoke-related volatile compounds inside.

The results of the study appeared on February 28, 2018, in Clinical Science, a peer-reviewed journal offering multi-disciplinary coverage and clinical perspectives focused on human health.

Third-hand smoke contains the same chemicals found in second-hand smoke from a cigarette. Some of these chemicals interact with molecules from the air to create a toxic mix that includes potential cancer-causing compounds which could lead to tumorigenesis in mice. (Image Credit: Antoine Snijders, Jian-Hua Mao, and Bo Hang/Berkeley Lab)

The Berkeley Lab study follows previous research reported by the Mayo Clinic and the American Heart Association linking a number of childhood illnesses to THS.

Decontamination is not easy because THS doesn’t just land on furniture. It gets into floors, carpeting, drywall, and insulation. It can even be picked up by pets in their fur and feathers. Short of an industrial cleaning, which may or may not rid a home of THS, a home exposed to cigarette smoke may never rid itself of the problem. And if you are a smoker, even switching to e-cigarettes to reduce the cancer risk to yourself and your family still contributes to THS, because the vapours produced by these substitutes contain volatile chemicals.

The only real solution is never to smoke at all. But if you must, then never smoke in the home or in a car. Never smoke near children or pets. And ensure that your clothes are washed separately from those of your children and other family members to ensure that THS residue is not transferred.

My dad (seen in the picture below when he was much younger) was a cigarette smoker, consuming up to three packs of unfiltered Camels daily. He smoked from his teenage years until he was almost fifty. Finally, he went cold turkey.

He traveled a lot. His clothes and car always smelled of smoke. And when he was home I frequently became sick with different respiratory infections (bouts of tracheitis, bronchitis, and even pneumonia).

Now that I look back, I am pretty certain my respiratory sensitivity was linked to second-hand smoke and THS. On a sadder note, 34 years after my father stopped smoking, he died from a combination of heart disease (brought on by smoking) and lung cancer.

China Doubling Down on Greening its Economy
http://www.21stcentech.com/china-doubling-greening-economy/

March 10, 2018 – In an article appearing on the Bloomberg website yesterday, Jeff Kearns, Hannah Dormido, and Alyssa McDonald described how the Chinese government is going green at breakneck speed. It seems only fair, since the previous four decades of China’s history turned the nation into the world’s largest polluter. In 2015, Berkeley Earth, an independent research group, estimated that air pollution in China contributes to 1.6 million deaths annually.

As I have studied China’s progress on the pollution issue, it has become apparent that the Chinese people are growing more critical of their government’s inability to keep the air clean enough not to taste it. It may sound like I’m exaggerating, but on bad days in places like Shanghai and Beijing, the pollutants from coal-fired power plants, home heating, cooking, and transportation create air laden with fine particulate matter 2.5 microns or smaller in size (PM2.5). When you sample the air with your tongue you can taste the sulfur dioxide, nitrogen oxides, and suspended carbon. Over much of eastern China, from Mongolia to the Yangtze River basin, and to points south as far as Shenzhen and Hong Kong, the air is making Chinese people sick.

On bad winter days, concentrations of fine particulate matter 2.5 microns or smaller (PM2.5) create very unhealthy air. This image shows very unhealthy air (the darker the red, the worse the air) over much of the country on January 31, 2018. (Source: Berkeley Earth)

China’s war on air pollution includes the single largest internal carbon market on the planet and a significant push to get the Chinese people into electric vehicles. China is also the world’s largest installer of solar panels, and the country is investing in hydrogen as a clean fuel and energy source. In an interesting development, it is also reducing its industrial capacity and its demand for coal, steel, aluminum, and paper in an effort to cut carbon emissions. The turnaround is taking hold, with improvements to air quality in Beijing (emissions are a third lower than in 2015). In other polluted cities, the reduction in emissions is about 10% over the same period.

Has this slowed China’s economic growth? So far, not by much, as the pivot to renewable energy production and electric cars picks up the slack from the decline in traditional heavy industries.

This picture of Beijing on December 1, 2015, shows the extent of China’s air pollution problem in and around its major cities. Today, days like this are becoming less frequent as Beijing’s pollutant levels have declined by a third. (Photo by Li Feng/Getty Images)