Friday, November 7, 2014

Friday Thinking 7 November 2014

Hello all – Friday Thinking is curated in the spirit of sharing. Many thanks to those who enjoy this.

….“a future bio designer should be able to code the properties of a living system…by describing the desired features in a biological programming language.” That programming language could be DNA, properly understood; but a better analogy might be to see DNA as the machine language — the 1s and 0s of biology. While the pioneers of computing dealt directly with 1s and 0s, we now describe a program’s “desired features” in high-level languages like Python; programming in binary only happens in a few special circumstances.
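To make the analogy concrete, here is a purely hypothetical sketch. Every feature name and sequence below is invented for illustration and has no basis in real genetics: the point is only that a high-level "biological programming language" would let a designer declare desired features and have them compiled down to a low-level sequence, just as a Python program is ultimately translated into machine code.

```python
# Hypothetical sketch only: feature names and "sequences" are invented.
# A high-level declaration of desired features is "compiled" into a
# low-level string, by analogy with high-level code becoming binary.

FEATURE_TO_SEQUENCE = {
    "fluoresce_green": "ATGGTGAGCAAGGGC",    # invented placeholder
    "resist_ampicillin": "ATGAGTATTCAACAT",  # invented placeholder
}

def compile_features(features):
    """'Compile' a list of high-level features into one DNA-like string."""
    parts = []
    for feature in features:
        if feature not in FEATURE_TO_SEQUENCE:
            raise ValueError(f"unknown feature: {feature}")
        parts.append(FEATURE_TO_SEQUENCE[feature])
    return "".join(parts)

program = compile_features(["fluoresce_green", "resist_ampicillin"])
print(program)
```

The designer never touches the low-level sequence directly, which is exactly the shift the quoted passage describes.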

But there’s a key assumption behind this conservatism that deserves to be made explicit and examined. The assumption is that the status quo is good, that stability and equilibrium are good.

Making this assumption explicit helps me to understand why resilience has become such a hot topic in the business world as well as in public policy circles. Executives feel like they are under attack. It’s not just about natural disasters or terrorist attacks. It’s far more pervasive, appearing daily in thousands of unexpected forms. In this kind of world, resilience resonates. Help me to get back to where I was.

And, in the context of enterprise resilience, the perception of being under attack is very real. ….corporate performance has been deteriorating for decades, “topple rates” are on the rise, life spans of companies are rapidly diminishing and volatility is increasing. It’s no wonder that there is intense longing for a “bounce back”.

It helps to make sense of the booming global growth of the “resilience industry” – books, conferences and experts all willing and able to help executives and their institutions to bounce back. Entrenched interests are desperate for reassurance that they can preserve what they have and will not be vulnerable to unexpected disruptions.

This focus on enterprise resiliency, while understandable, is much too narrow and ultimately dysfunctional and self-defeating. Focusing on enterprise resiliency as the ability to “bounce back” reflects the short-termism that consumes most executives today. It loses sight of the fact that many of these short-term “attacks” are part of a much more profound phase shift at the market/ecosystem level.

The technium is the sphere of visible technology and intangible organizations that form what we think of as modern culture. It is the current accumulation of all that humans have created. For the last 1,000 years, this technosphere has grown about 1.5% per year. It marks the difference between our lives now versus 10,000 years ago. Our society is as dependent on this technological system as on nature itself. Yet, like all systems, it has its own agenda. Like all organisms, the technium also wants.

It appears that the “generation naming” sweepstakes have started up again. As the bloom is fading from the Millennial (née Generation Y) rose, marketers and social commentators are turning their eyes to the next sweet young thing: that cohort of people born somewhere around the turn of the Millennium. What is interesting to me is how very silly most of these conversations tend to be — making the consistent human mistake of linear projection. Just like 1950s-era futurists imagined a world of flying cars, there is a consistent mistake of assuming that the next generation will simply be the next version of Millennials. This couldn’t be further from the truth. So, to throw my hat into the ring, I will call them the “Omega Generation”, because these kids will be, in many profoundly important ways, the last generation.

….Generation Omega, then, would be that cohort of people who do not remember anything before September 11, 2001. These are kids who simply have no deep reference to what life was like before we decided as a culture to fully immerse ourselves in fear. Equally, of course, these are kids who have absolutely no recollection of the time before Google and Wikipedia, when the right answer was not simply a keystroke away. Interestingly, some of them will have vague recollections of life before smart phones, financial crises, gay marriage, and Minecraft; but these and many other cultural dynamics of the past decade and a half combine to form the general “adaptive landscape” that has given rise to their unique, shared generational sensibilities.

Broadly speaking, we can suggest a number of characteristics that might be part of the generational flavor. For example, having been weaned in a highly interactive and responsive environment (think iTunes, YouTube and Minecraft), this is likely to be a generation of intuitive agency. They expect significant influence over and responsibility for their world. For example, unlike previous generations for whom media was an act of passive consumption (whatever is on NBC at 8 is what you are going to watch), their most fundamental assumption is the inverse: not only can you choose, but you must choose from a nearly unlimited selection. And the notion of being an active participant in “remix culture”? Millennials were the early adopters. For Generation Omega this is simply the water…..

….this is a generation for whom “to be networked” is an unconscious assumption. They are native collaborators and bricoleurs — assembling what they need from a cloud of people and materials “out there” on the network; and presenting it back without thinking twice. In a strong sense, precisely because it has been with them as long as walking and talking, they perceive the network as an extension of themselves. If Millennials are “digital natives”, Generation Omega is “network native”.

And here’s a 3-minute video that shows some of the possible environment that the Omega generation will take for granted. When everything becomes linked with everything, matter becomes mind….

“We humans have indeed always been adept at dovetailing our minds and skills to the shape of our current tools and aids. But when those tools and aids start dovetailing back — when our technologies actively, automatically, and continually tailor themselves to us, just as we do to them — then the line between tool and user becomes flimsy indeed.” - Andy Clark

Here’s a great alert for anyone interested in the next really big thing - or another take on the Internet of Things.

On Wednesday, November 12th, Kevin Kelly, a founding Board member of Long Now, will speak on “Technium Unbound,” as part of our monthly Seminars About Long-term Thinking. Each month the Seminar Primer gives you some background about the speaker, including links to learn even more.

“Instead of going to university, I went to Asia. That was one of the best decisions I ever made,” says Kevin Kelly about following his instincts into the Big Here in the early 1970s.

For someone who is probably best known as a technology pundit, it may be surprising to learn that his formative years were spent traveling in areas where his 35mm camera was often the most advanced technology for miles. But Kevin’s work has always been about cultures as well as technologies.

In 1992 Kevin joined Wired magazine prior to its launch and became its Executive Editor for the first 7 years of its existence. Wired won two National Magazine Awards during his tenure. He is still on staff at Wired as “Senior Maverick” and writes a few times a year for the magazine.

These two articles by Susan Crawford discuss the future of the Internet and the business models that incumbents would like to see propagated. These issues are important to us all.

The proof is in: Detailed report shows how U.S. Internet access monopolies punish rivals and catch innocent bystanders in the crossfire—legally.

Devan Dewey, the Chief Technology Officer of midsize investment consultancy NEPC, has an orderly office and a highly organized mind. So naturally, when some at-home employees near Boston complained they could barely work because their connections to the company data center had slowed to a crawl, Dewey and his team determined to find out why.

His team’s research led him to suspect something astonishing and dark: that NEPC, and probably many other businesses and consumers, were caught in the crossfire of an ongoing battle between “eyeball networks” run by Internet access providers, such as Comcast and Verizon; and “transit networks” used by competing video services, such as Netflix. He came to wonder whether, in their attempts to charge Netflix for access to their subscribers, Comcast and some other networks were recklessly affecting Internet connectivity for businesses like NEPC. Could that possibly be true?

The answer is yes. What started out as suspicion is now fully documented, in a study that has just been released by a nonprofit research consortium called M-Lab. M-Lab’s data suggests the logical conclusion that Verizon and Comcast, as well as Time Warner Cable, CenturyLink, and AT&T, are intentionally squeezing data coming from some incoming networks — in particular, networks associated with Netflix, which competes with these companies in video entertainment. Customers of these eyeball networks are getting degraded service that cannot be explained by anything other than business decisions. And these eyeball networks are acting with an apparent disregard for users not affiliated with Netflix, affecting all kinds of traffic and all kinds of users. By tacitly allowing traffic jams on the highways of fiber that Netflix was using to send its bits, these providers left everyone else using those routes stuck as well. NEPC employees working from home, for instance, could barely operate.
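The telltale pattern behind a study like M-Lab's can be sketched in a few lines: on a healthy route, throughput at the evening peak and overnight should be broadly comparable; across a congested interconnect, it collapses every evening when demand rises. This is a minimal illustration with invented sample data, not M-Lab's actual methodology or measurements:

```python
# Minimal sketch, with invented data: compare measured throughput at the
# evening peak vs. overnight on the same route. A ratio far below 1.0,
# repeated daily, points to a congested interconnect rather than a slow
# last-mile connection.

def peak_degradation(samples):
    """samples: list of (hour_of_day, mbps) pairs.
    Returns peak-hour throughput as a fraction of off-peak throughput,
    or None if either window has no measurements."""
    peak = [m for h, m in samples if 19 <= h <= 23]    # evening peak
    off_peak = [m for h, m in samples if 2 <= h <= 6]  # overnight
    if not peak or not off_peak:
        return None
    return (sum(peak) / len(peak)) / (sum(off_peak) / len(off_peak))

# Invented measurements: throughput collapses every evening.
samples = [(3, 48.0), (4, 50.0), (5, 49.0), (20, 4.0), (21, 3.5), (22, 4.5)]
ratio = peak_degradation(samples)
print(f"peak throughput is {ratio:.0%} of off-peak")
```

In this invented example the evening throughput is under a tenth of the overnight figure, which is the kind of business-hours-only degradation the study documents.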

The interconnection battle has been framed as one between Comcast, Verizon, AT&T, and others on one side and Netflix on the other. A better way to think of it might be to put the eyeball networks on one side and the future on the other. What about the next Netflix (or any other business that Comcast, Verizon, Time Warner Cable, AT&T or CenturyLink view as competitive with their own)? What about all the other businesses that will be affected when Comcast finds the next network connection it wants to squeeze?

The problem is novel and complicated. But the M-Lab data is unassailable: this problem is harming consumers. Internet connectivity has been damaged for millions of users, for months at a time, with no consequences for the actors that caused the problem. If the FCC ever needed clear evidence of consumer and business harm, here it is.

Speaking of the Internet and gaming - here’s another reason for a new commons-based approach to our Internet infrastructure. The full PEW Research Report is available as a pdf download.

Cell phones and social media platforms like Facebook and Twitter are playing an increasingly prominent role in how voters get political information and follow election news, according to a new national survey by the Pew Research Center.

Modern technology has given the powerful new abilities to eavesdrop and collect data on innocent people. Surveillance Self-Defense is EFF's guide to defending yourself and your friends from surveillance by using secure technology and developing careful practices.

Select an article from our index to learn about a tool or issue, or check out one of our playlists to take a guided tour through a new set of skills.

Speaking of self-defence - here’s something all of us have seen in the movies, and it seems to be becoming an ever more popular meme.

Computers will soon become more intelligent than us. Some of the best brains in Silicon Valley are now trying to work out what happens next

The scene in the cramped office in Berkeley on a recent Saturday feels like a typical start-up carried along by the tech boom, with engineers working through the weekend in a race against time. The long whiteboard down one wall has been scrawled over in different-coloured pens. A large jar of candy and a glass-doored fridge full of soda sit by the entrance.

Nate Soares, a former Google engineer, is sitting on the edge of a sofa weighing up the chances of success for the project he is working on. He puts them at only about 5 per cent. But the odds he is calculating aren’t for some new smartphone app. Instead, Soares is talking about something much more arresting: whether programmers like him will be able to save mankind from extinction at the hands of its own most powerful creation.

The object of concern – both for him and the Machine Intelligence Research Institute (Miri), whose offices these are – is artificial intelligence. Super-smart machines with malicious intent are a staple of science fiction, from the soft-spoken HAL 9000 to the scarily violent Skynet. But the AI that people like Soares believe is coming mankind’s way, very probably before the end of this century, would be much worse.

This article has a fantastic list of companies that engage in perpetual experimentation in order to keep evolving - the list is a must-read. It also points to the need for organizations to move toward a different research paradigm, one oriented to proactive, responsive experimentation rather than what traditional research has been - especially in light of increasing data availability.

Innovation has become the holy grail. Finding innovation is almost a sacred quest for the solution that will create growth, and open new eras of prosperity and well-being.

Unfortunately, like many things called holy, the concept of innovation is invoked ritually and ceremonially more than it is embraced in practice.

For all the talk about innovation, I see many leaders in numerous organizations in every sector who actively stifle it. They say they want more innovation. But at the same time, they seem to operate by a set of hidden principles designed to prevent innovations from surfacing or succeeding. I’ve compiled them into a set of anti-rules. Acting in these nine ways guarantees that there will be little or no innovation of any significance, because no one had the time, money, or motivation to innovate:

Speaking about the evolutionary edge - this is a very interesting and short article. There is also a short video with some original footage (6 min) - worth the view.

In 1995 ABC Science reported on the work of a "skunkworks" team that had conceived of a device which they thought would change the future of newspapers. Most of that visionary work is now reality.

The Knight-Ridder company was once the USA's second largest publisher of newspapers. The company had the foresight to establish a small team to explore how the digital revolution might change the way future consumers would access news and other information, and how newspapers could adapt in order to retain their audiences.

The work of this team, led by Roger Fidler, proved impressively prescient. It conceived of and described an entirely new product, a tablet computer weighing about a kilogram with a colour touch screen and resolution as good as paper — and identified how consumers would use such a device not merely as a tool to access news and information, but also to communicate and wirelessly purchase goods and advertised products.

The "future" consumer device they envisaged in 1995 looks awfully similar to the device we now know as the iPad — which was released 15 years later. The similarity is so striking that in the Samsung-Apple lawsuit, Samsung lawyers presented the Knight-Ridder research as "prior art", arguing that many of Apple's patents were not of its own original invention.

The jury did not agree. What do you think?

Postscript: Shortly after this video was made, Knight-Ridder closed Roger Fidler's research project. A changed corporate leadership saw no urgency in pursuing the concept of the tablet. After all, the newspaper business was still very profitable, and Roger's tablet was considered to be so far into the future as to be science fiction.

Today, Roger Fidler owns an iPad and the Knight-Ridder company is no more.

And talking about both evolution and the future of publication - this is a very interesting point of view that should be considered.

Here's a little real talk about the book publishing industry — it adds almost no value, it is going to be wiped off the face of the earth soon, and writers and readers will be better off for it.

The fundamental uselessness of book publishers is why I thought it was dumb of the Department of Justice to even bother prosecuting them for their flagrantly illegal cartel behavior a couple of years back, and it's why I'm deaf to the argument that Amazon's ongoing efforts to crush Hachette are evidence of a public policy problem that needs remedy. Franklin Foer's recent efforts to label Amazon a monopolist are unconvincing, and Paul Krugman's narrower argument that they have some form of monopsony power in the book industry is equally wrongheaded.

What is indisputably true is that Amazon is on track to destroy the businesses of incumbent book publishers. But the many authors and intellectuals who've been convinced that their interests — or the interests of literary culture writ large — are identical with those of the publishers are simply mistaken.

Books are published by giant conglomerates

...the book publishing industry is not a cuddly craft affair. It's dominated by a Big Four of publishers, who are themselves subsidiaries of much larger conglomerates. Simon & Schuster is owned by CBS, HarperCollins is owned by NewsCorp, Penguin and Random House are jointly owned by Pearson and Bertelsmann, and Hachette is part of an enormous French company called Lagardère.

Here is a wonderful example of digital media, with the digital itself rather than the printing press as the content of the document.

As promised, this month's infographic is packed with actual science. I decided to illustrate how different animals breathe, and I picked three species that I thought were particularly awesome. The topic really lends itself to a short looped GIF so that was an added plus.

In other news, I'm getting my new computer this week! It's going to be awesome working on something that can have more than one heavy-duty application running at once. And to make things even better, it's almost Halloween. Have an awesome weekend guys :)

This is a wonderful TED talk by one of the world’s greatest foresight practitioners - and who is in fact a Canadian. A must see, 16 min.

Adam is an international consultant who facilitates multi-stakeholder and complex negotiations. The best-selling author of "Solving Tough Problems" and "Power and Love", he shares the lessons he has learned about the need to balance the two forces of power and love.

Linking the human nervous system to computers is providing unprecedented control of artificial limbs and restoring lost sensory function.

Neuroprosthetic research began long before it solidified as an organized academic field of study. In 1973, University of California, Los Angeles, computer scientist Jacques Vidal observed modulations of signals in the electroencephalogram of a patient and wrote in Annual Review of Biophysics and Bioengineering: “Can these observable electrical brain signals be put to work as carriers of information in man-computer communication or for the purpose of controlling such external apparatus as prosthetic devices or spaceships?” While we don’t yet have mind-controlled spaceships, neural control of a prosthetic device for medical applications is now becoming commonplace in labs around the world.

In its simplest form, a neuroprosthetic is a device that supplants or supplements the input and/or output of the nervous system. For decades, researchers have eyed neuroprosthetics as ways to bypass neural deficits caused by disease, or even to augment existing function for improved performance. Today, several different types of surgical brain implants are being tested for their ability to restore some level of function in patients with severe sensory or motor disabilities. In a very different vein, a company called Foc.us recently started selling simple, noninvasive brain stimulators to improve normal people’s attention while gaming. And perhaps the most visible recent demonstration of the power of neuroprosthetics was a spinal cord–injured patient using a brain-controlled exoskeleton to kick off the 2014 World Cup in Brazil. In short, tinkering with the brain has begun in earnest.

Speaking of new types of prosthetics - here’s something to watch - maybe we are closer to the tricorder than we think.

Will ultrasound-on-a-chip make medical imaging so cheap that anyone can do it?

A scanner the size of an iPhone that you could hold up to a person’s chest and see a vivid, moving, 3-D image of what’s inside is being developed by entrepreneur Jonathan Rothberg.

Rothberg says he has raised $100 million to create a medical imaging device that’s nearly “as cheap as a stethoscope” and will “make doctors 100 times as effective.” The technology, which according to patent documents relies on a new kind of ultrasound chip, could eventually lead to new ways to destroy cancer cells with heat, or deliver information to brain cells.

Rothberg has a knack for marrying semiconductor technology to problems in biology. He started and sold two DNA-sequencing companies, 454 and Ion Torrent Systems (see “The $2 Million Genome” and “A Semiconductor DNA Sequencer”), for more than $500 million. The profits have allowed Rothberg, who showed up for an interview wearing worn chinos and a tattered sailor’s belt, to ply the ocean on a 130-foot yacht named Gene Machine and to indulge high-concept hobbies like sequencing the DNA of mathematical geniuses.

The imaging system is being developed by Butterfly Network, a three-year-old company that is the furthest advanced of several ventures that Rothberg says will be coming out of 4Combinator, an incubator he has created to start and finance companies that combine medical sensors with a branch of artificial-intelligence science called deep learning.

Jeremy Rifkin has made the case that solar and wind energy are on a ‘Moore’s Law’ type of trajectory.


Lazard, an asset management firm, has a fascinating new analysis of renewable and other energy prices out. There are a huge number of insights in this, from an outside analyst whose primary interest is financial.

First, the plunge in renewable prices continues, and over the last 5 years, wind has resumed its plunge as well. Their numbers show an average price decline over the last 5 years of 78% for utility scale solar and 58% for wind. These numbers are unsubsidized, without investment tax credit.
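As a quick back-of-envelope check (assuming the quoted figures are total declines over the five-year period), those numbers imply compound annual price declines of roughly 26% per year for utility-scale solar and 16% per year for wind:

```python
# Convert a total price decline over N years into the implied
# compound annual decline rate.

def annual_decline(total_decline, years=5):
    """Total fractional decline over `years` -> compound annual decline."""
    remaining = 1.0 - total_decline          # fraction of the price left
    return 1.0 - remaining ** (1.0 / years)  # per-year decline rate

solar = annual_decline(0.78)  # 78% total decline -> ~26% per year
wind = annual_decline(0.58)   # 58% total decline -> ~16% per year
print(f"solar: {solar:.0%}/yr, wind: {wind:.0%}/yr")
```

A steady decline in the mid-20s percent per year is faster than the roughly 30% per 18-24 months associated with Moore's Law framings, which is why the comparison keeps being made.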

Second, unsubsidized prices are cost competitive with grid wholesale prices. Solar, which delivers power during the daytime and afternoon, heavily overlapping with the late afternoon and early evening peak, is well below the wholesale price of peak power (provided by ‘peaker’ natural gas plants that only operate during those few hours of the day). Solar is even closing in on the wholesale cost of 24/7 operated coal and natural gas plants that provide ‘baseload’ power overnight (and as the underlying power throughout the day).

It’s all about storage now. (Or soon, at any rate.) Inside of a decade, in most of the US and most of the world, solar or wind will be cheaper than coal or natural gas on an instantaneous, non-stored basis. This trend appears inexorable. And so long as there is demand for more energy at the hours at which solar and wind are delivering (which is the case right now), then the situation is great.

By the time we reach 20% grid penetration of renewables, we seem on path to have storage costs down to roughly 1/10th of their current level. That’s a price at which a mix of solar, wind, and storage could out price even current ‘baseload’ power in large fractions of the country and the world.

And speaking of a drop in solar & wind energy - here’s something Jeremy Rifkin has been pointing out.

The industrial engine of Europe is increasingly powered by backyard windmills and locally owned solar panels. And this complex, patchwork system just might be the future of sustainable energy.

On any given day, Johannes van Bergen, director of the municipal utility Stadtwerke Schwäbisch Hall in southwestern Germany, conducts his team's array of gas, heat, and electricity sources to meet the energy needs of at least several hundred thousand Swabians in the region, as well as more than 90,000 customers elsewhere in Germany. And every day -- in fact, every hour -- that energy mix is constantly in flux.

Technicians at the town's smart-grid center monitor and manage the utility's roughly 3,000 regional energy suppliers: several thousand solar photovoltaic (PV) installations, two wind parks, one gas-and-steam power station, six small hydro-electric works, three biomass (wood pellet), six biogas plants, and 48 combined heat and power plants, as well as other conventional and renewable energy suppliers outside the municipality.

The population that this ballet of coordinated energy sources serves is admittedly modest, but it's here that the future of Germany's energy industry is being tested in full -- and proven.

In a world of growing uncertainty and mounting performance pressure, it’s understandable that resilience has become a very hot topic. Everyone is talking about it and writing about it. We all seem to want to develop more resilience. But I’m going to take a contrarian position and suggest that resilience, at least as conventionally defined, is a distraction and perhaps even dangerous.

How can I say that? The view crystallized as I sat through a two-day gathering several months ago on the theme of resilience. I was intrigued to go deeper into the topic because I had heard so much about it and these were experts from a broad range of disciplines. But the more I heard, the more distressed I became.

What does resilience mean?

Resilience is used very loosely as a term, so there are many different definitions. But across all the talks given in that conference (and much of the literature I have read outside the conference) there is one common theme that can be reduced to a simple phrase: it is the ability to “bounce back” in the face of unexpected shocks. In engineering, it is the ability of a material or structure to resume its original size and shape after being deformed. In systems science, it is the ability to return to equilibrium, steady state or original function after a shock to the system. In social analysis, it is the capability of a social group to absorb disturbance and reorganize to retain essentially the same function, structure, and identity.

Here’s an interesting piece calling for a ‘commons-based’ approach to our genetic commonwealth: it’s time for a biological commons.

We’re at the start of a revolution in biology, so let’s avoid the tragedy of the anticommons.

A few months ago, I singled out an article in BioCoder about the appearance of open source biology. In his white paper for the Bio-Commons, Rüdiger Trojok writes about a significantly more ambitious vision for open biology: a bio-commons that holds biological intellectual property in trust for the good of all. He also articulates the tragedy of the anticommons, the nightmarish opposite of a bio-commons in which progress is difficult or impossible because “ambiguous and competing intellectual property claims…deter sharing and weaken investment incentives.” Each individual piece of intellectual property is carefully groomed and preserved, but it’s impossible to combine the elements; it’s like a jigsaw puzzle, in which every piece is locked in a separate safe.

We’ve certainly seen the anticommons in computing. Patent trolls are a significant disincentive to innovation; regardless of how weak the patent claim may be, most start-ups just don’t have the money to defend themselves. Could biotechnology head in this direction, too? In the U.S., the Supreme Court has ruled that human genes cannot be patented. But that ruling doesn’t apply to genes from other organisms, and arguably doesn’t apply to modifications of human genes. (I don’t know the status of genetic patents in other countries.) The patentability of biological “inventions” has the potential to make it more difficult to do cutting-edge research in areas like synthetic biology and pharmaceuticals (Trojok points specifically to antibiotics, where research is particularly stagnant).

The free-software and open source movements have done a lot to enable innovation in computing. We have a rich “commons” of software (Linux, Apache, MySQL, Hadoop, to say nothing of the many tools from the GNU project). This software commons forms the technological basis for just about every technology company in existence today, including Facebook, Google, Apple, and even Microsoft. Can the same ideas be equally productive for biology?

Speaking about the genetic commons - here’s something we have to think about. One important dimension is the need to distinguish between immoral business models that appropriate genetic capability and the human capacity to develop better food that grows in a wider range of environments - new organisms to produce medicine, fuel, energy, and materials.

With its world-leading research investments and vast size, China will dominate the future of genetically modified food—despite the resistance of its population.

It is a hot, smoggy July weekend in Beijing, and the gates to the Forbidden City are thronged with tens of thousands of sweat-drenched tourists. Few make the trek to the city’s east side and its more tranquil China Agricultural Museum, where several formal buildings are set amid sparkling ponds ringed by lotus plants in full pink bloom. The site, which is attached to the Ministry of Agriculture, promises that it will “acquaint visitors with the brilliant agricultural history of China”—but what’s missing from the official presentation is as telling as what’s on display.

At least 9,000 years ago, people living in China were the first to cultivate rice, developing elaborate irrigation systems. Today, rice is the nation’s (and half the world’s) most important crop. Some 2,500 years ago, the Chinese also invented the first really efficient iron ploughshares, called kuan, with a curved V shape that efficiently turned hard soil. These millennia-old innovations are matched by those of the past century. A display honors Yuan Longping, China’s revered “father of hybrid rice,” who in the mid-1960s posited that if he could find male-sterile rice plants—ones unable to self-pollinate—he could create hybrid strains reliably and at large scale. (In general, hybrids are more vigorous and higher-yielding than the parent varieties.) He later found such plants and, together with other researchers, created a process to make high-yielding hybrids year after year, revolutionizing rice production.

But the exhibits don’t mention the vast suffering wrought by Chinese agricultural failure. Yuan himself lived through Chairman Mao Zedong’s “Great Leap Forward” of 1958–1961, which triggered a collapse in food production and distribution by banning private farming in favor of vast collective farms. As many as 45 million people died, most by starvation. The museum also says nothing about the most fought-over product of modern-­day agricultural technology: genetically modified organisms, or GMOs. Yes, there’s a 1990s-era gene gun, which used high-­pressure gas to blast DNA-coated particles into plant cells to create early transgenic crops. And there’s a stalk representing the big GMO success story that used this approach: Bt cotton, a pest-resistant variety that has been planted widely in China for 15 years, greatly increasing production while slashing pesticide use. (The plant, which incorporates DNA from a soil bacterium that’s harmful to insects, makes up 90 percent of the cotton crop and by one estimate produces a $1 billion annual economic gain for farmers.) But the story seems to end more than a decade ago.

For Fun

Here’s a small glimpse at what can be accomplished with ‘Minecraft’ - an independently developed game - 4 min video.

The Pandora's Blocks creative network brings you our Head Into The Clouds PMC contest entry for your viewing pleasure. Downloads and other notes will be available on our Planetminecraft! If anyone is interested in the outcomes of my homage to the Great Pumpkin, they can be viewed here: