Thursday, August 10, 2017

Friday Thinking 11 August 2017

Hello all – Friday Thinking is a humble curation of my foraging in the digital environment. My purpose is to pick interesting pieces, based on my own curiosity (and the curiosity of the many interesting people I follow), about developments in some key domains (work, organization, social-economy, intelligence, domestication of DNA, energy, etc.) that suggest we are in the midst of a change in the conditions of change - a phase-transition. That tomorrow will be radically unlike yesterday.

Risk aversion, weak customer focus, and siloed mind-sets have long bedeviled organizations. In a digital world, solving these cultural problems is no longer optional.

Too often, management writers talk about risk in broad-brush terms, suggesting that if executives simply encourage experimentation and don’t punish failure, everything will take care of itself. But risk and failure profoundly challenge us as human beings. As Ed Catmull of Pixar said in a 2016 McKinsey Quarterly interview, “One of the things about failure is that it’s asymmetrical with respect to time. When you look back and see failure, you say, ‘It made me what I am!’ But looking forward, you think, ‘I don’t know what is going to happen and I don’t want to fail.’ The difficulty is that when you’re running an experiment, it’s forward looking. We have to try extra hard to make it safe to fail.”

While lots of attention is directed toward identifying the next great start-up, the defining tech-industry story of the last decade has been the rise of Apple and Google. In terms of wealth creation, there is no comparison. Eight years ago, neither one of them was even in the top 10 most valuable companies in the world, and their combined market value was less than $300 billion. Now, Apple and Alphabet (Google’s parent company) have become the two most valuable companies, with a combined market capitalization of over $1.3 trillion. And increasingly, these two behemoths are starting to collide in various markets, from smartphones to home-audio devices to, according to speculation, automobiles.

But the greatest collision between Apple and Google is little noticed. The companies have taken completely different approaches to their shareholders and to the future, one willing to accede to the demands of investors and the other keeping power in the hands of founders and executives. These rival approaches are about something much bigger than just two of the most important companies in the world; they embody two alternative models of capitalism, and the one that wins out will shape the future of the economy.

Creating a system of lifelong learning from cradle to grave involves moving away from the model in which all investment in learning is concentrated in the earliest years, and from the expectation that the skills developed then will last a lifetime. On top of that, lifelong learning beyond school commands small budgets, so it is unsurprising that so little structural change is evident.

Lifelong learning is not tidy, and it generates powerful incidental benefits. This is because learning leaks: skills and aptitudes generated in one context are applied elsewhere. Britain's Ford Motor Company provided a clear example of this in the 1980s. It agreed, at the end of a trade bargaining pay round, to allocate 0.3% of its wage bill to a scheme, jointly managed at plant level by managers and blue- and white-collar unions, to support staff with learning outside of company training. Workers learned to drive, to plaster walls, strengthened their maths, learned Spanish and took Open University degree courses. They took the skills they developed for pleasure back into the workplace. The firm found that absenteeism rates dropped, demarcation disputes on the introduction of new procedures fell back, retention rates improved and the major bi-annual pay strikes symbolic of poor labour relations came to an end. Investing in learning for pleasure improved the bottom line.

The jobs market is well into the 21st century. So why isn’t our education system?

Today’s jobs are vastly different than they were a generation ago. All of us, from Gen Zers to Boomers, are facing a working world that is more changeable and unpredictable than ever.

The days of working for 40 years at one job and retiring with a good pension are gone. Now the average time in a single job is 4.2 years, according to the U.S. Bureau of Labor Statistics. What’s more, 35% of the skills that workers need — regardless of industry — will have changed by 2020.

That rapid pace of change in jobs and skills means there’s a growing demand to update skills as well. According to a new report on workforce re-skilling by the World Economic Forum, one in four adults reported a mismatch between the skills they have and the skills they need for their current job.

Here’s the problem in a nutshell: the job opportunities that are available today are 21st-century jobs. But the way most people perform these jobs is still stuck in the previous century. As is the way our society is training and educating people.

Two possible visions of our future are competing for our attention: an Anthropocene desert of homogenised mongrels and a virtual supercontinent teeming with new species

WELCOME to the New Pangaea, a virtual supercontinent created by globalised human society. Able to hitch-hike on boats and planes, land species are no longer constrained by the oceans and can turn up anywhere and everywhere. Does that excite or appal you?

If the political world is divided between the globalisers and the localisers, so too is environmental thinking. And never more so than in these two compelling tracts.

In Inheritors of the Earth, ecologist Chris Thomas says that we are witnessing a virtual recreation of the single continent that dominated the planet until 175 million years ago. The subtitle to his book invites us to celebrate how “nature is thriving”, rather than buckling under the strain, with extinctions more than compensated for by a sudden upsurge in evolution, driven by globetrotting migrant species.

On the other side of the environmental aisle is Confessions of a Recovering Environmentalist, a series of touchingly written, but deeply pessimistic essays. Here, former eco-activist Paul Kingsnorth retreats into a world of nativist angst, offering an extreme version of the environmental longing to protect what is local, whether it is an endangered species or a traditional way of living. He mourns “the breaking of the link between people and places”.

Both authors have been on a long road. In 2004, as a young ecologist, Thomas made front-page news for a prediction that up to a third of species would die out due to climate change. He stands by that apocalyptic forecast, but now reckons the plus side is even bigger. While most ecologists bemoan the sixth great extinction in the planet’s history, Thomas says we are also “on the brink of a sixth major genesis of new life”.

This is a very interesting article - a longish read - but well worth it for anyone interested in evolution.

As scientists speculate what kind of life might exist on other worlds, a provocative idea is taking hold: that alien life, unlike anything we know, might already exist here on Earth. The idea is that life might have arisen two or more times on our planet – not just once, as long assumed. Our form of life came to dominate, while other forms receded into the corners. This ‘shadow biosphere’ would be difficult to detect, since it might not contain DNA, proteins or the other molecules that we rely on to detect life.

The ctenophore’s brain suggests that, if evolution began again, intelligence would re-emerge because nature repeats itself

the ctenophore represents an evolutionary experiment of stunning proportions, one that has been running for more than half a billion years. This separate pathway of evolution – a sort of Evolution 2.0 – has invented neurons, muscles and other specialised tissues, independently from the rest of the animal kingdom, using different starting materials.

This animal, the ctenophore, provides clues to how evolution might have gone if not for the advent of vertebrates, mammals and humans, who came to dominate the ecosystems of Earth. It sheds light on a profound debate that has raged for decades: when it comes to the present-day face of life on Earth, how much of it happened by pure accident, and how much was inevitable from the start?

A longish article about the emerging understanding of the gene pool as flows of DNA into and out of life forms. Worth the read. Could it be that ‘junk DNA’ represents a pantry of handy possibilities - ones that flow in and out according to environmental contingencies and the available microbiome?

Species gain and shed startling amounts of DNA as they evolve, and even genomes that look stable churn furiously. What does it mean?

“Why would an onion have five times more DNA than we have? Are they five times more clever?”

Of course, it wasn’t just the onion that upended assumptions about a link between an organism’s complexity and the heft of its genetic code. In the first broad survey of animal genome sizes, published in 1951, Arthur Mirsky and Hans Ris —pioneers in molecular biology and electron microscopy, respectively — reported with disbelief that the snakelike salamander Amphiuma contains 70 times as much DNA as a chicken, “a far more highly developed animal.” The decades that followed brought more surprises: flying birds with smaller genomes than grasshoppers; primitive lungfish with bigger genomes than mammals; flowering plants with 50 times less DNA than humans, and flowering plants with 50 times more; single-celled protozoans with some of the largest known genomes of all.

Why, for instance, do some genomes contain very little noncoding DNA — also, controversially, often called “junk DNA” — while others hoard it? Does all this clutter — or lack of it — serve a purpose?

This past February, a tantalizing clue arose from research led by Aurélie Kapusta while she was a postdoctoral fellow working with Cedric Feschotte, a geneticist then at the University of Utah, along with Alexander Suh, an evolutionary biologist at Uppsala University in Sweden. The study, one of the first of its kind, compared genome sequences across diverse lineages of mammals and birds. It showed that as species evolved, they gained and shed astonishing amounts of DNA, although the average size of their genomes stayed relatively constant. “We see the genome is very dynamic, very elastic,” said Feschotte, who is now at Cornell University.

This may be a premature signal - but ultimately one that societies will have to grapple with - Yes we’ve all seen Minority Report - and predicting crime in advance may be impossible - but solving crimes after the fact more quickly and with more evidence may be very plausible. The dilemma of privacy and transparency.

This sounds a little like Minority Report to us. China is looking into predictive analytics to help authorities stop suspects before a crime is committed.

According to a report from the Financial Times, authorities are tapping on facial recognition tech, and combining that with predictive intelligence to notify police of potential criminals, based on their behaviour patterns.

Guangzhou-headquartered Cloud Walk has been trialing its facial recognition system that tracks a person's movements. Based on where someone goes, and when, it hands them a rating of how at risk they are of committing a crime.

For instance, someone buying a kitchen knife is not suspicious. But if the same person goes and gets a hammer and a sack later, that person's suspicious rating goes up, a Cloud Walk spokesperson told the FT.

The company's software is tapped into the police database in over 50 cities and provinces, and can flag up suspicious characters live.
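As a rough way to picture the kind of rule-based "suspicion rating" the FT report describes, here is a toy sketch in Python. It is purely illustrative: Cloud Walk's actual system is proprietary and presumably far more sophisticated (facial recognition combined with movement patterns), and the item names and scoring weights below are invented for the example.

```python
# Toy sketch of a rule-based "suspicion rating" over observed purchases.
# The combinations and weights are hypothetical, loosely following the
# FT's knife/hammer/sack example quoted above.

SUSPICIOUS_COMBOS = [
    {"kitchen knife", "hammer", "sack"},   # the example combination from the FT report
]

def suspicion_rating(purchases: set) -> int:
    """Score rises as more of a flagged combination of purchases is completed."""
    score = 0
    for combo in SUSPICIOUS_COMBOS:
        overlap = len(purchases & combo)
        if overlap == len(combo):
            score += 10          # full combination present: strong flag
        elif overlap > 1:
            score += overlap     # partial combination: mild increase
    return score

print(suspicion_rating({"kitchen knife"}))                    # a knife alone: 0
print(suspicion_rating({"kitchen knife", "hammer", "sack"}))  # full combo: 10
```

The point of the sketch is only that single observations stay benign while particular sequences of observations cross a threshold - which is exactly where the privacy dilemma above bites.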

This endeavor is an exemplar of open science - in this case, open science about the mind and brain. For anyone interested in cutting-edge research on cognition, mind and neuroscience, this is a must-know resource.

As founder and director of the MIND group, I consider myself to be neither a junior nor a senior member. Therefore, I have not contributed a target paper or a commentary. If anything, my contribution lies in the choice and selection of authors and in the work, together with my collaborator Jennifer Windt, of bringing this project to completion.

This is an edited collection of 39 original papers and as many commentaries and replies. The target papers and replies were written by senior members of the MIND Group, while all commentaries were written by junior group members. All papers and commentaries have undergone a rigorous process of anonymous peer review, during which the junior members of the MIND Group acted as reviewers. The final versions of all the target articles, commentaries and replies have undergone additional editorial review.

Besides offering a cross-section of ongoing, cutting-edge research in philosophy and cognitive science, this collection is also intended to be a free electronic resource for teaching. It therefore also contains a selection of online supporting materials, pointers to video and audio files and to additional free material supplied by the 92 authors represented in this volume. We will add more multimedia material, a searchable literature database, and tools to work with the online version in the future. All contributions to this collection are strictly open access. They can be downloaded, printed, and non-commercially reproduced by anyone.

This is a wonderful 17 min TED Talk discussing consciousness - which is distinct from intelligence - but involves a great deal of projective, predictive perception. Worth the view.

Right now, billions of neurons in your brain are working together to generate a conscious experience -- and not just any conscious experience, your experience of the world around you and of yourself within it. How does this happen? According to neuroscientist Anil Seth, we're all hallucinating all the time; when we agree about our hallucinations, we call it "reality." Join Seth for a delightfully disorienting talk that may leave you questioning the very nature of your existence.

Barrett, a neuroscientist at Northeastern University, is the author of How Emotions Are Made. She argues that many of the key beliefs we have about emotions are wrong. It’s not true that we all feel the same things, that anyone can “read” other people’s faces, and it’s not true that emotions are things that happen to us.

The classical view assumes that emotions happen to you. Something happens, neurons get triggered, and you make these stereotypical expressions you can’t control. It says that people scowl when they’re angry and pout when they’re sad, that everyone around the world not only makes the same expressions, but that you’re born with the capacity to recognize them automatically.

In my view, a face doesn’t speak for itself when it comes to emotion, ever. I’m not saying that when your brain constructs a strong feeling that there are no physical cues to the strength of your feeling. People do smile when they’re happy or scowl when they’re sad. What I’m saying is that there’s not a single obligatory expression. And emotions aren’t some objective thing, they’re learned and something that our brains construct.

For a lovely, accessible series of 9 short videos in which Barrett explains the findings of her research, here is a playlist.

Lisa Feldman Barrett is a University Distinguished Professor of Psychology at Northeastern University, where she focuses on the study of emotion. She is director of the Interdisciplinary Affective Science Laboratory.

Moving beyond thinking of consciousness as arising in a distinct, isolated, atomistic individual - this builds on the necessary realization that we are a Part Of - Not Apart From - the world around us. This equally interesting 17 min TED Talk explores how our built environment can be architected to shape our consciousness and even manufacture consent. :)

A handful of people working at a handful of tech companies steer the thoughts of billions of people every day, says design thinker Tristan Harris. From Facebook notifications to Snapstreaks to YouTube autoplays, they're all competing for one thing: your attention. Harris shares how these companies prey on our psychology for their own profit and calls for a design renaissance in which our tech instead encourages us to live out the timeline we want.

The frontier of biology, DNA and computation is developing at an accelerating rate - the future of computation is deeply entangled with the mind-machine interface and even integration - the boundaries between technology and biology are blurring.

“We demonstrate that an RNA molecule can be engineered into a programmable and logically acting “Ribocomputing Device,” said Wyss Institute Core Faculty member Peng Yin, Ph.D., who led the study and is also Professor of Systems Biology at Harvard Medical School. “This breakthrough at the interface of nanotechnology and synthetic biology will enable us to design more reliable synthetic biological circuits that are much more conscious of the influences in their environment relevant to specific goals.”

Novel RNA nano-devices in living cells can sense and analyze multiple complex signals for future synthetic diagnostics and therapeutics

Synthetic biologists are converting microbial cells into living devices that are able to perform useful tasks ranging from the production of drugs, fine chemicals and biofuels to detecting disease-causing agents and releasing therapeutic molecules inside the body. To accomplish this, they fit cells with artificial molecular machinery that can sense stimuli such as toxins in the environment, metabolite levels or inflammatory signals. Much like electronic circuits, these synthetic biological circuits can process information and make logic-guided decisions. Unlike their electronic counterparts, however, biological circuits must be fabricated from the molecular components that cells can produce, and they must operate in the crowded and ever-changing environment within each cell.

So far, synthetic biological circuits can only sense a handful of signals, giving them an incomplete picture of conditions in the host cell. They are also built out of several moving parts in the form of different types of molecules, such as DNAs, RNAs, and proteins, that must find, bind and work together to sense and process signals. Identifying molecules that cooperate well with one another is difficult and makes development of new biological circuits a time-consuming and often unpredictable process.

As reported in Nature, a team at Harvard’s Wyss Institute for Biologically Inspired Engineering is now presenting an all-in-one solution that imbues a molecule of ‘ribo’ nucleic acid or RNA with the capacity to sense multiple signals and make logical decisions to control protein production with high precision. The study’s approach resulted in a genetically encodable RNA nano-device that can perform an unprecedented 12-input logic operation to accurately regulate the expression of a fluorescent reporter protein in E. coli bacteria only when encountering a complex, user-prescribed profile of intra-cellular stimuli. Such programmable nano-devices may allow researchers to construct more sophisticated synthetic biological circuits, enabling them to analyze complex cellular environments efficiently and to respond accurately.
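To get an abstract feel for what a "12-input logic operation" means here, the gate can be modeled as a Boolean function over the presence or absence of trigger RNAs. The sketch below is a toy in Python under invented names - the paper's actual devices are engineered RNA strands operating inside E. coli, not software - but it mirrors the OR-of-ANDs style of logic the Wyss team describes:

```python
# Illustrative sketch (not the Wyss Institute's actual design): model a
# ribocomputing gate as a Boolean function over trigger-RNA presence.
# Six AND-pairs of triggers, combined by OR, give a 12-input operation
# that "expresses the reporter" only for a prescribed input profile.

def ribocomputing_gate(triggers: dict) -> bool:
    """Return True (reporter expressed) if any AND-pair of triggers is present."""
    pairs = [(f"a{i}", f"b{i}") for i in range(1, 7)]  # 6 pairs = 12 inputs
    return any(triggers.get(x, False) and triggers.get(y, False)
               for x, y in pairs)

print(ribocomputing_gate({}))                         # no triggers: reporter off
print(ribocomputing_gate({"a3": True, "b3": True}))   # pair (a3, b3): reporter on
print(ribocomputing_gate({"a1": True, "b2": True}))   # mismatched pair: off
```

The biological feat, of course, is implementing a function like this out of base-pairing interactions inside a living cell rather than in code.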

Here’s a very interesting signal of emerging new medical advances related to domesticating DNA.

Too often we hear of medical innovations that eventually lead nowhere. Every now and then, however, there’s good news and, more importantly, hope. A perfect example is the new report from the University of Miami, according to which transplanting an artificially-grown pancreas seems to have put an end to one diabetic’s reliance on insulin injections.

According to the researchers, the patient in question is a 43-year-old woman who has been suffering from Type 1 diabetes, a metabolic disease that causes the pancreas to generate very little insulin, for nearly 25 years. To aid the body’s natural insulin production system, a team from the University of Miami’s Diabetes Research Institute turned to biomedical engineering, using islet cells to create an artificial pancreas.

The insulin-synthesizing cells were then transplanted onto the patient’s omentum, which is basically a part of the peritoneum that joins the stomach with the surrounding abdominal organs. The omentum, the scientists explain, was chosen as the site of transplantation instead of the more conventional liver to bypass any unwanted complications. Within 17 days after the surgery, the woman stopped needing insulin. Now, a year later, the bioengineered cells continue to do the work of a pancreas.

This is a very interesting signal - the domestication of oceans for farming, carbon capture and fuel production - among many other things. While the article is not very well written, it is worth the read. It seems a plausible geoengineering approach to climate change and population growth. Of course, zero-marginal-cost renewable energy could also enable industrial-scale desalination and irrigation of many desert areas as well.

About 37 percent of Earth’s land area is agricultural land. About one-third of this area, or 11 percent of Earth’s total land, is used for crops. The balance, roughly one-fourth of Earth’s land area, is pastureland, which includes cultivated or wild forage crops for animals and open land used for grazing.

There is a proposal to use about 9% of the ocean’s surface for massive kelp farms. The ocean’s surface area is about 36 billion hectares. This would offset all CO2 production and provide 0.5 kg of fish and sea vegetables per person per day for 10 billion people as an “incidental” by-product. Nine per cent of the world’s oceans would be equivalent to about four and a half times the area of Australia.
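The arithmetic behind these figures can be checked directly. This is a back-of-envelope sketch using rounded assumptions (36 billion hectares of ocean surface, 769 million hectares for Australia); the computed multiple comes out near 4.2, roughly in line with the article's "about four and a half":

```python
# Back-of-envelope check of the kelp-farm proposal's figures.
OCEAN_SURFACE_HA = 36e9   # ~36 billion hectares of ocean surface (rounded)
AUSTRALIA_HA = 769e6      # Australia's land area, ~7.69 million km^2 (rounded)

farm_area_ha = 0.09 * OCEAN_SURFACE_HA        # 9% of the ocean surface
print(f"Kelp farm area: {farm_area_ha / 1e9:.2f} billion ha")
print(f"Multiples of Australia: {farm_area_ha / AUSTRALIA_HA:.1f}")

# Implied annual seafood yield for 10 billion people at 0.5 kg each per day:
annual_yield_tonnes = 10e9 * 0.5 * 365 / 1000
print(f"Implied annual yield: {annual_yield_tonnes / 1e9:.2f} billion tonnes")
```

For scale, that implied yield is well over an order of magnitude above today's roughly 25-million-tonne seaweed harvest mentioned below.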

Giant kelp, the giant of the kelp forest, grows faster than tropical bamboo—about 10 to 12 inches a day in the bay—and under ideal conditions it can grow an astonishing two feet each day.

In the last decade, seaweed cultivation has been expanding rapidly thanks to growing demand for its use in pharmaceuticals, nutraceuticals and antimicrobial products, as well as biotechnological applications. Seaweed today is used in some toothpastes, skin care products and cosmetics, paints and several industrial products, including adhesives, dyes and gels. Seaweed is also used in landscaping or to combat beach erosion.

In 2016, seaweed farms produced more than 25 million metric tonnes. The global value of the crop, US$6.4 billion (2014), exceeds that of the world’s lemons and limes.

This is a very important signal - reaching a tipping point in 3D printing. The images and 3 min video are worth the view.

Desktop Metal – remember the name. This Massachusetts company is preparing to turn manufacturing on its head, with a 3D metal printing system that's so much faster, safer and cheaper than existing systems that it's going to compete with traditional mass manufacturing processes.

We've been hearing for years now about 3D printing and how it's going to revolutionize manufacturing. As yet, though, it's still on the periphery.

But a very exciting company out of Massachusetts, headed by some of the guys who came up with the idea of additive manufacture in the first place, believes it's got the technology and the machinery to boost 3D printing into the big time, for real.

This is something we’ll soon see in medical, emergency and field kits - an innovation arising from biomimicry.

Anyone who has ever tried to put on a Band-Aid® when their skin is damp knows that it can be frustrating. Wet skin isn’t the only challenge for medical adhesives – the human body is full of blood, serum, and other fluids that complicate the repair of numerous internal injuries. Many of the adhesive products used today are toxic to cells, inflexible when they dry, and do not bind strongly to biological tissue.

A team of researchers from the Wyss Institute for Biologically Inspired Engineering and the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) has created a super-strong “tough adhesive” that is biocompatible and binds to tissues with a strength comparable to the body’s own resilient cartilage, even when they’re wet.

“The key feature of our material is the combination of a very strong adhesive force and the ability to transfer and dissipate stress, which have historically not been integrated into a single adhesive,” says corresponding author Dave Mooney, Ph.D., who is a founding Core Faculty member at the Wyss Institute and the Robert P. Pinkas Family Professor of Bioengineering at SEAS. The research is reported in this week’s issue of Science.

This is a very interesting signal about the change represented by the digital environment. Television may be going the way of print.

After years of insisting it wasn’t so, the TV Industrial Complex now admits that it’s contracting: The number of people paying for TV has been declining for several years.

But that’s not the only part of the TV world that’s shrinking: Actual TV sets are disappearing from homes, too.

After years of steady increases, the number of TVs in homes shrank to an average of 2.3 in 2015, down from an average of 2.6 televisions per household in 2009, according to the latest available data from the Energy Information Administration.

The best-case scenario for that, put forward by the people who sell TV programming for a living, is that Americans are watching TV on devices that aren’t TVs, like laptops, tablets and phones. The flip side of that argument: You can do lots of other things on those devices, which creates even more competition for TV viewing time.

Here is a great signal of the change in conditions of change regarding whole occupational frameworks. While the article is about coal-based energy plants, something very similar will emerge as transportation becomes electricity-based.

As coal-fired electric power plants close across the U.S., they take with them coal mining jobs, to be sure. And while those job losses have generated considerable political heat, a no-less important employment shift is under way within power plants themselves.

Gone are many of the mechanics, millwrights, and welders who once held high paying jobs to keep coal-fired power plants operating.

As maintenance-intensive coal-fired power plants—chock full of rotating equipment and leak-prone pipes and valves, not to mention conveyor belts and coal ash handling equipment—are retired, they are being replaced to a large extent by gas-fired units that make full use of sensors, predictive maintenance software, and automated control systems.

As a result, the extensive use of analytics and automation within natural gas-fired power plants means that staffing levels can be cut to a fraction of what they were a decade ago.

Recent announcements confirm the trend - such as …

On August 1, Michigan-based DTE Energy revealed plans to spend almost $1 billion to build a 1,100-megawatt gas-fired power plant. When the station enters service in 2022, it will replace three existing coal-fired units that currently employ more than 500 people. Job openings at the new gas-fired plant? Thirty-five full-time employees, says a DTE spokesperson.

This is another weak but definite signal of the future of human-computer-system transformation. I first heard of this as a concept in a Sci-Fi book by Nicola Griffith called ‘Slow River’ written in 1995.

Olivia Solon felt more key fob than RoboCop after getting implanted with a microchip to make contactless purchases. But the future could hold much more

It took two deep breaths, then a tattooed piercer called Andy stabbed me in the fleshy part of my hand between the forefinger and thumb, injecting a tiny microchip encased in a glass capsule the size of a large grain of rice. And so I became the world’s lamest cyborg.

The radio-frequency identification (RFID) chip, once registered, allows me to open doors, unlock computers and pay for items – provided those systems use the right software and have dedicated contactless chip readers.

For now, that means that I can buy a KitKat from a vending machine in the canteen of a company called Three Square Market, based on the outskirts of River Falls, Wisconsin. The company, which provides self-service “micro-markets” to businesses around the world, became the first in the US to offer these implants to all of its employees and a handful of journalists at a “chip party” this week.

The idea came earlier this year when the company’s vice-president of international development, Tony Danna, visited a co-working space called Epicenter in Sweden, which has been chipping staff since 2015.

This is a great signal - on a number of dimensions - as an emergent use of the Blockchain, as a signal of distributed infrastructure and innovation regarding business models. Mostly it indicates the growing power of the digital environment as the platform of the coordination economy arising through near costless coordination.

A residential electric-car charger spends most of its time just hanging around unused.

That underutilization looked like an opportunity to Val Miftakhov, CEO of the smart charger startup eMotorWerks. On Tuesday, the company launched a beta test of a distributed, peer-to-peer charging marketplace in California that lets drivers pay each other for use of their home chargers.

If successful, this concept could drastically expand the population of readily available EV chargers, at least in places with a high density of home charging stations. That reduces range anxiety, promoting more EV ownership and potentially generating a virtuous cycle.

For a charger company like eMotorWerks, this is part of a broader strategy to move from selling hardware alone to offering software that generates value beyond the initial purchase.

"The value of this flexibility will surpass any type of hardware margin we can generate in this market," Miftakhov said.

The peer-to-peer concept relies on blockchain, one of today's hottest trends in energy, to verify the transactions without a central regulator.
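To see why a blockchain-style record needs no central regulator, consider what hash-chaining buys you. The sketch below is a toy in Python - it is not eMotorWerks' actual implementation, and all names and figures are invented - but it shows how linking each charging transaction to the hash of the previous one makes the shared ledger tamper-evident:

```python
import hashlib
import json

# Toy hash-chained ledger of peer-to-peer EV charging transactions.
# Hypothetical sketch only: real blockchains add consensus, signatures,
# and replication, none of which is modeled here.

def make_block(prev_hash: str, payer: str, host: str,
               kwh: float, price_usd: float) -> dict:
    """Create a transaction block linked to the previous block's hash."""
    record = {
        "prev_hash": prev_hash,
        "payer": payer,       # EV driver paying for the charging session
        "host": host,         # owner of the home charger
        "kwh": kwh,
        "price_usd": price_usd,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    return record

def verify_chain(chain: list) -> bool:
    """Recompute every hash and check the links to detect tampering."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block("0" * 64, "driver_1", "host_a", kwh=8.0, price_usd=2.40)
chain = [genesis, make_block(genesis["hash"], "driver_2", "host_a", 12.5, 3.75)]
print(verify_chain(chain))   # intact chain verifies
chain[0]["kwh"] = 80.0       # tamper with an earlier transaction...
print(verify_chain(chain))   # ...and verification fails
```

Because anyone holding a copy of the chain can run the verification themselves, participants can settle payments between each other without trusting a central bookkeeper - which is the property the marketplace relies on.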