Thursday, October 1, 2015

Friday Thinking 2 October 2015

Hello – Friday Thinking is curated on the basis of my own curiosity and offered in the spirit of sharing. Many thanks to those who enjoy this. In the 21st Century curiosity will SKILL the cat.

Technology allows us to re-imagine work.

What if the kernel of on-demand work lies not in short-term associations and spot-market exchanges, but in allowing us to create a new understanding of work: contextual interaction based on collaborative creativity and human capital? The relations between workers become the central, and in many ways defining, feature of the firm.

A firm, then, is not a bundle of assets belonging to owners, but a bundle of dynamic commitments between people. The organization becomes a process of ongoing organizing.

The future of human-centric work can be built on relations and complementarity: the human capital of a worker is then worth more when applied together with the human capital of other members of the community. In industrial processes your value could easily be less than what you are. In contextual, post-industrial settings your value can be more than what you are.

You work more from your relations than your skills.

The productivity of an individual depends not just on being part of a community but being part of a particular community engaged in particular commitments. The context matters most.

“What we were trying to do with this workshop was create a sustainable community. Among the shared values of that community is the idea that the real benefit of this data is best achieved by making it public and free.”

This is the breathtaking, crucial, central element of the Array of Things mindset: the whole thing is aimed at creating a repository of public, free, real-time data in a single place. All the devices are set to communicate only with the Chicago researchers. No outsider can initiate a connection to one of these nodes. When the device connects to the central location in Chicago — at Argonne National Lab — it drops its data there, receives an acknowledgement that the data was received, and then deletes the data from its own local storage place. The City of Chicago, in turn, will take a feed of data from Argonne and pull it into its own open data portal.
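The store-and-forward pattern described here can be sketched in a few lines. This is a hypothetical illustration of the flow only (class and method names are mine, not the actual node software): the node initiates every connection, uploads its buffer, and deletes nothing until receipt is acknowledged.

```python
import json


class SensorNode:
    """Minimal, hypothetical sketch of the push-only data flow described
    above; not the actual Array of Things node software.

    The node never accepts inbound connections: it initiates the link,
    uploads its buffered readings, waits for an acknowledgement, and only
    then deletes its local copy.
    """

    def __init__(self):
        self.local_store = []  # readings buffered between uploads

    def record(self, reading):
        self.local_store.append(reading)

    def upload(self, send):
        # `send` stands in for the outbound connection to the central
        # repository; it returns True only when receipt is acknowledged.
        if not self.local_store:
            return
        acknowledged = send(json.dumps(self.local_store))
        if acknowledged:
            self.local_store = []  # delete the local copy only after the ack
```

The key design point is the ordering: a failed or unacknowledged upload leaves the data in place, so nothing is lost between connection attempts.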

Presto: data is available to the city it came from, any other city can compare its data to the Chicago-stored data, and everyone’s learning — without having to pay a company for an elaborate proprietary system. (Carnegie-Mellon and Georgia Tech are already testing sensor devices, and the Chicago workshop also focused on ensuring that data from those sensors is available in Chicago in standardized form — accompanied by latitude and longitude information, for example, and using a consistent form of time-stamping.)
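A standardized reading of the kind described, carrying coordinates and a consistent timestamp, might look like the following. The field names here are illustrative assumptions, not the actual Array of Things schema:

```python
import json
from datetime import datetime, timezone

# Hypothetical example of a standardized sensor reading: latitude and
# longitude accompany the value, and the timestamp uses one consistent
# form (ISO 8601 in UTC), so readings from different cities compare cleanly.
reading = {
    "node_id": "chicago-0001",
    "latitude": 41.8781,
    "longitude": -87.6298,
    "timestamp": datetime(2016, 1, 15, 12, 0, tzinfo=timezone.utc).isoformat(),
    "sensor": "temperature",
    "value_celsius": -3.2,
}
print(json.dumps(reading))
```

Agreeing on this much structure is what lets a Carnegie-Mellon or Georgia Tech sensor feed sit beside Chicago's in the same portal.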

The amount of data sitting in the systems of organizations is vast, and in many cases untapped. The issue is, “If only we knew what we know.” How you handle that information makes the difference between an efficient company and one that is flailing.

In the next few years, the application of Big Data generated by corporate real-estate teams will allow for highly tailored examinations of space. To track office usage, there are simple tools such as swipe cards that inform a company of when staff come and go.

But the world of Big Data is moving far beyond that. For instance, sensing technology can tell us whether existing desk space is being used or not. Many companies are discovering that, at any one time, at least 25% of their workspace is not being used.

… political and social revolution is preceded by the emergence, within the old system, of the new productive system and its value logic, not the other way around, as the socialist and Marxist tradition has claimed. Today, in the very womb of capitalism, the new mode of production, the new way of value creation and distribution, is already emerging and growing, still under the domination of the old system; but because its logic is fundamentally different from the logic of capital, it cannot possibly be subsumed forever, and it prepares the ground for a structural transformation. This structural transformation, or 'phase transition', will make the emergent subsystem into the new dominant logic. Today, the economy based on common knowledge pools is already estimated at 1/6th of GDP in the US (17 million workers). Netarchical capitalism (the hierarchy of the network, hence 'net'-'archical'), the forces of capital that are funding and enabling the transition towards the collaborative commons, though under their own conditions, is an increasingly strong sector of the economy; but its parasitic mode of operation (i.e. the expropriation of nearly 100% of the value created by human cooperation) makes it impossible for those forces to be the next ruling class. A capitalism that doesn't pay its value creators simply cannot exist in the long term as a stable system. This is why Jeremy Rifkin is entirely correct in his prediction for the future.

So what is the existing commons economy? It's the economy of commons-oriented peer production, first described by Yochai Benkler in The Wealth of Networks. It consists of productive communities of contributors, paid or unpaid, who are contributing, not to privatized knowledge, but to common pools of knowledge, code and design, which fuel a new commons-oriented economy. It's the economy of open knowledge, free software, open design and open hardware, more and more connected to practices of open and distributed manufacturing. It's the economy fueled by the exodus from waged labor, into a freelance economy of young urban knowledge workers, who live from the market economy, but produce more and more for open knowledge pools.

Beyond Jeremy Rifkin: How Will the Phase Transition to a Commons Economy Actually Occur?

The highest leadership is about “why”, not just “how” or “what”. So what is real leadership? Is it merely making new products, services…launching a startup…rising through the ranks…attaining a title? Nope. Let’s differentiate between levels of leadership. At the lowest is what you might call technical leadership. Setting standards, innovating, that kind of thing. Think of it as the “how”. Then there’s organizational leadership. Managing people to make things, setting objectives, defining payoffs. Think of it as the “what”. And then there is moral leadership. Moral leadership answers “why”. It is concerned with the truly Big Questions. Why are we here? What’s the point? And so it provides the Big Answers: purpose, meaning, a sense of significance. And it is only in those Big Answers that we find a way home: to ourselves. The people we were meant to be.

How has the BBC responded to the government’s Green Paper on its future? First it abandoned the old Reithian assurance that it knew best by claiming that the people ‘owned’ it. Then, early this month it made a bolder move. Its director, Tony Hall, declares that he wants it to become at least in part an Open BBC. It will open up its platform and networks to forge alliances with other public service providers; not to centralise and monopolise but to release the public value in the experimental energy of our digital times.

This is a far-sighted proposal for the BBC and sets an example for Labour. There has to be a different foundation for public service broadcasting than relying upon those who ‘know best’, just as the Labour party can no longer rely on leaders who are ‘in touch with the future’, as Blair would say. Provided the Cameron government does not confine it to legacy broadcasting, the change to an Open BBC proposed by Hall could be profoundly significant. It will need underpinning by ensuring people do indeed own the BBC, say by turning the Trust into a mutual. But at the heart of what seems a simple idea is an essential democratic response to the deep transformation of the British state.

We live in an age of simplistic explanations. We build simple systemic models to guide us. As a result, both our sense making and our actions are built on an inadequate appreciation of the complex systems we are part of.

The principles of simplification still apply to the social systems of work: most of our firms can be described as mono-cultures. We also do our best to productize humans to fit the job markets. Many organizations are productive in the short-term, but fragile in the long-term. As long as the environment remains the same, simplified systems are very efficient, but they immediately become counterproductive when the environment changes even slightly. And it always will.

Work is situational - Why simplicity may not always be the ultimate sophistication

This is an interesting take on Jeremy Rifkin’s recent book “The Zero Marginal Cost Society”, which should be a must-read. Perhaps I like this because it rightly notes that we are heading toward a phase transition - something that I’ve been noting for at least 5 years now.

Beyond Jeremy Rifkin: How Will the Phase Transition to a Commons Economy Actually Occur?

In his new book, Jeremy Rifkin focuses on the value crisis of contemporary capitalism based on the revolution in marginal costs which destroys the profit rate. He concludes that this will mean that the economy and society will re-orient themselves around collaborative commons, with a more peripheral role for market dynamics. In this, Jeremy Rifkin joins the founding charter of the P2P Foundation, which was precisely created in 2005 to observe, study and promote this transition.

Past historical phase transitions, say the transition from the Roman Empire’s slave-based system to feudal serfdom, or the transition from feudalism to capitalism, were not exactly smooth affairs, so it may be unrealistic to expect a smooth and unproblematic phase transition towards a post-capitalist social order.

To get a better understanding of how this transition could occur, we can do two things. First, we can look at past transitions, such as transition to feudalism, and ask ourselves what this means for the current one; second, we can look at the micro-economy of the already existing commons economy, and perhaps deduce from this the future outlines of the social order to come. Follow me in these two explorations.

This is an important development - in one sense it accelerates the spread of genomic testing - but more worrisome is the development of a ‘private’ genetic database rather than a public commons where many more could benefit.

A genomic entrepreneur plans to sell genetic workups for as little as $250. But $25,000 gets you “a physical on steroids.”

Fifteen years ago, scientific instigator J. Craig Venter spent $100 million to race the government and sequence a human genome, which turned out to be his own. Now, with a South African health insurer, the entrepreneur says he will sequence the medically important genes of its clients for just $250.

Human Longevity Inc. (HLI), the startup Venter launched in La Jolla, California, 18 months ago, now operates what’s touted as the world’s largest DNA-sequencing lab. It aims to tackle one million genomes inside of four years, in order to create a giant private database of DNA and medical records.

In a step toward building the data trove, Venter’s company says it has formed an agreement with the South African insurer Discovery to partially decode the genomes of its customers, returning the information as part of detailed health reports.

The deal is a salvo in the widening battle to try to bring DNA data to consumers through novel avenues and by subsidizing the cost of sequencing. It appears to be the first major deal with an insurer to offer wide access to genetic information on a commercial basis.

Jonathan Broomberg, chief executive of Discovery Health, which insures four million people in South Africa and the United Kingdom, says the genome service will be made available as part of a wellness program and that Discovery will pay half the $250, with individual clients covering the rest. Gene data would be returned to doctors or genetic counselors, not directly to individuals. The data collected, called an “exome,” is about 2 percent of the genome, but includes nearly all genes, including major cancer risk factors like the BRCA genes, as well as susceptibility factors for conditions such as colon cancer and heart disease. Typically, the BRCA test on its own costs anywhere from $400 to $4,000.

The barn doors have been opened and who knows what will come out now. Here’s some more developments on the domestication of DNA.

A team including the scientist who first harnessed the revolutionary CRISPR-Cas9 system for mammalian genome editing has now identified a different CRISPR system with the potential for even simpler and more precise genome engineering.

In a study published in Cell, Feng Zhang and his colleagues at the Broad Institute of MIT and Harvard and the McGovern Institute for Brain Research at MIT, with co-authors Eugene Koonin at the National Institutes of Health, Aviv Regev of the Broad Institute and the MIT Department of Biology, and John van der Oost at Wageningen University, describe the unexpected biological features of this new system and demonstrate that it can be engineered to edit the genomes of human cells.

"This has dramatic potential to advance genetic engineering," said Eric Lander, Director of the Broad Institute and one of the principal leaders of the human genome project. "The paper not only reveals the function of a previously uncharacterized CRISPR system, but also shows that Cpf1 can be harnessed for human genome editing and has remarkable and powerful features. The Cpf1 system represents a new generation of genome editing technology."

This is an important article for a number of reasons. First, it highlights the future of brain studies and the possibility of using this approach as a foundation for personnel development, from initial assessment to ongoing health and cognitive monitoring to eventual tests of the effectiveness of cognitive training. Any personnel research organization should be building the multidisciplinary research skills and infrastructure needed to anticipate these new forms of research and personnel development. In a decade this form of data science supporting cognitive development and health will be widespread.

There is a strong correspondence between a particular set of connections in the brain and positive lifestyle and behaviour traits, according to a new study by Oxford University researchers.

A team of scientists led by the University’s Centre for Functional MRI of the Brain has investigated the connections in the brains of 461 people and compared them with 280 different behavioural and demographic measures that were recorded for the same participants. They found that variation in brain connectivity and an individual’s traits lay on a single axis — where those with classically positive lifestyles and behaviours had different connections to those with classically negative ones. The findings are published in Nature Neuroscience.

The team used data from the Human Connectome Project (HCP), a $30m brain imaging study funded by the US National Institutes of Health and led by Washington, Minnesota and Oxford Universities. The HCP is pairing up functional MRI scans of 1,200 healthy participants with in-depth data gained from tests and questionnaires. “The quality of the imaging data is really unprecedented,” explains Professor Stephen Smith, who was the lead author of the paper. “Not only is the number of subjects we get to study large, but the spatial and temporal resolution of the fMRI data is way ahead of previous large datasets.” So far, data for 500 subjects have been released to researchers for analysis.

The Oxford team took the data from 461 of the scans and used it to create an averaged map of the brain’s processes across the participants. “You can think of it as a population-average map of 200 regions across the brain that are functionally distinct from each other,” explains Professor Smith. “Then, we looked at how much all of those regions communicated with each other, in every participant.”

The result is a connectome for every subject: a detailed description of how much those 200 separate brain regions communicate with each other, which can be thought of as a map of the brain’s strongest connections. The team then added the 280 different behavioural and demographic measures for each subject and performed a ‘canonical correlation analysis’ between the two data sets — a mathematical process that can unearth relationships between the two large sets of complex variables.
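For readers curious about the method, canonical correlation analysis finds paired linear combinations of two variable sets that correlate maximally. A minimal NumPy sketch on synthetic stand-in data (not the HCP data; the dimensions and the whitening-plus-SVD formulation are my own illustration):

```python
import numpy as np


def first_canonical_correlation(X, Y, reg=1e-6):
    """Largest canonical correlation between two data matrices (rows = subjects)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = X.T @ X / n + reg * np.eye(X.shape[1])  # within-set covariances,
    Syy = Y.T @ Y / n + reg * np.eye(Y.shape[1])  # lightly regularized
    Sxy = X.T @ Y / n                             # cross-covariance
    # Whiten each block via its Cholesky factor, then the canonical
    # correlations are the singular values of the whitened cross-covariance.
    Kx = np.linalg.inv(np.linalg.cholesky(Sxx))
    Ky = np.linalg.inv(np.linalg.cholesky(Syy))
    return np.linalg.svd(Kx @ Sxy @ Ky.T, compute_uv=False)[0]


# Toy demonstration: two variable sets sharing a single latent axis.
rng = np.random.default_rng(0)
brain = rng.normal(size=(300, 10))      # stand-in "connectivity" features
behaviour = rng.normal(size=(300, 8))   # stand-in "behavioural" measures
behaviour[:, 0] = brain[:, 0] + 0.1 * rng.normal(size=300)
r = first_canonical_correlation(brain, behaviour)
print(f"first canonical correlation: {r:.3f}")
```

The single strong axis the Oxford team reports corresponds to a dominant first canonical correlation of exactly this kind, just computed over 200 brain regions and 280 measures rather than this toy's handful.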

The researchers point out that their results resemble what psychologists refer to as the ‘general intelligence g-factor’: a variable first proposed in 1904 that’s sometimes used to summarise a person’s abilities at different cognitive tasks. While the new results include many real-life measures not included in the g-factor — such as income and life satisfaction, for instance — those such as memory, pattern recognition and reading ability are strongly mirrored.

This is a great milestone - if it works out.

A New Light-Based Memory Chip Could Change the Fundamentals of Computing

Electrons are quick, but they’re not quick enough — in fact they’re holding back the speed of modern computing. Now, a team has developed the world’s first ever light-based memory chip that can store data permanently, and it could help usher in a new era of computing.

You might have noticed that the clock speeds of chips haven’t really increased in years. Instead, our computers now come equipped with multi-core processors, farming out tasks to be completed on separate cores rather than crunching more in one place. The reason for that is, oddly, the fact that the transmission of data between memory and chip can’t keep up with higher clock speeds. The speed at which electrons can be sent down the interconnects between memory and processor is slower than the speed at which faster silicon can chomp through the information — a problem known as the von Neumann bottleneck.

… what’s required is a computer architecture that can run on photons alone, with the memory and processor operating with light rather than electricity. And that’s what researchers have been trying to do. Now an international team of researchers has finally cracked at least part of the problem: they’ve created the world’s first light-based memory chip that can store data indefinitely.

The new kind of memory, developed by researchers from the Universities of Oxford and Exeter in the UK and the University of Münster and KIT in Germany, uses what’s known as a phase-change material as the basis of its storage. It uses an alloy of germanium-antimony-tellurium known as GST — the same material that’s used in rewritable CDs and DVDs.

This isn’t technically AI - but the results may provide a real boost to computational capabilities.

Darwin on a chip: New electronic circuits mimic natural networks like the human brain

Researchers have demonstrated working electronic circuits that have been produced in a radically new way, using methods that resemble Darwinian evolution. The size of these circuits is comparable to the size of their conventional counterparts, but they are much closer to natural networks like the human brain. The findings promise a new generation of powerful, energy-efficient electronics.

Speaking of smart AI - here’s a 20 min video demonstrating the power of Watson - this is a MUST VIEW for anyone interested in the trajectory of the enhancement of human memory and analysis. The first demo has Watson discuss the reliability of Wikipedia based on the scientific literature. Watson is no longer just a computer in a room - it’s a set of services in the ‘cloud’. “We do not program Watson; we do not tell it what to think. It learns over iteration.”

The Array of Things will be the central nervous system of cities. Without invading your privacy.

I’ve been excited about the Array of Things — a network of beautifully-designed sensors poised to capture and make public real-time, non-personal data about the livability of a city — ever since it (they?) started following me on Twitter in June 2014. A sensor network with a personality and a public service mission — what more could a responsive city want? I was happy to let it follow me, and followed it back so I could read its tweets.

This month, the Array of Things moved several giant steps closer to becoming a crucial general-purpose, worldwide sensor data infrastructure for researchers and policymakers. New money from the National Science Foundation is coming in, new collaborators from around the world are learning about it, and 50 devices will be installed on the streets of Chicago in early 2016, with hundreds more to be added in the years to come. Most importantly, the leaders of the initiative (the City of Chicago and the University of Chicago’s Urban Center for Computation and Data) are committed to openness and public consultation — which means the Array of Things initiative will continue to thrive.

Meet the Array of Things:

Courtesy U Chicago’s Urban Center for Computation and Data. It’s a board on which sensors have been mounted that can measure things that affect life in a city: climate, air quality, light, vibration, numbers of pedestrians or cars passing the node, and ambient noise. The board also provides room for a computer, a communication device, and a power control device. Every one of these things — sensors and other elements of the overall system — can be swapped in or out as needed. Nothing is hard-wired or glued together in a way that prevents maintenance (compare this to your smartphone). The shield protecting the devices, the overall skin for each board, has been designed by the School of the Art Institute of Chicago. So these sensor clusters will be eye-catching, attractive structures suited for urban landscapes.

Each node will be mounted on a pole (or building) and connected to a power source and a connection to the Internet. Think of it as a city fitness tracker, continuously measuring the block-by-block physical environment of a city.

Chicago is re-imagining more - here’s what the museum and library can morph into to enable the acquisition of 21st-century literacies.

Welcome to Tinkering Lab, Chicago’s first DIY maker-space for families! Step into the ultimate workshop where we provide the space and resources, and you decide what to do next. We’re talking REAL tools, REAL materials and the freedom to innovate and explore life outside those fancy computer and smartphone screens.

Start by taking our Pegboard Challenge, which features different gears, balls, chutes and other loose parts. Add, remove and tweak to create your own unique cause-and-effect reaction.

Once you’re finished, make your way to the exhibit’s workshop spaces, and jump right in. Trouble getting started? Try a visit to the Tool Bar, and choose from our wide selection of hammers, power drills, screwdrivers, saws and much more. Our professionally trained staff is available to assist when using tools that need extra supervision, and will happily brainstorm if tinkerers hit a creative wall.

Another dot in the emergence of the Smart City - here’s an initiative that is happening in Toronto.

Steam Labs - Your Community MakerSpace in the Heart of Downtown Toronto

Looking to give his own kids and their friends opportunities to learn about high tech making, and inspired by Gever Tulley’s TED talk, Andy Forest along with Marianne Mader started a “Tinkering Club” summer camp in their garage in 2010. As a web developer and a tinkerer, Andy brought some skills with him, but more importantly brought an attitude of honoring kids’ abilities. The main point was to help them learn and discover what they wanted to! They made boats and sank them full of kids in Lake Ontario. They hacked Nerf guns with Arduinos to make them motion-activated. They had a lot of fun and became confident in their abilities as makers!

After the first camp, Andy noticed a big change in his own kids – they started teaching themselves. Through online resources and experimentation, they were learning to make all kinds of things on their own. They would just hand Andy lists of tools and materials that they needed. A child who asks for soap-making supplies and a drawing tablet for Christmas is a life-long learner.

So in the spring of 2012, Andy and Marianne acquired a permanent makerspace location and formed a non-profit organisation. The goal was to provide a place to give kids access to the technologies, materials and skills that they couldn’t get on their own, and teach them that they are capable of anything! They even used their wedding as a fundraiser, and got their first 3D printer. During renovations, they knew they were on to something when kids who had been to the camp would see them walking by their house and would run outside shouting “When are you opening!?”

Since then, thousands of kids have come through the doors and emerged as robotics engineers, wood workers, costumers, video game developers, animators, 3D designers, Minecraft programmers and super heroes! We’re so proud of our army of kids prepared to invent the future together.

Now with STEAMLabs, Andy and Marianne are excited to expand their original vision to bring the world of high tech making to people of all ages and all abilities! They aspire to provide a community hub for all makers to share ideas, problem solve, and to bring their ideas to life.

This is an interesting possibility for a whole range of potential learning.

Intrigued by fossils of a new humanlike species dubbed Homo naledi, researchers and students use 3-D printing to handle the bones and search for clues.

When it comes to understanding human evolution, the trove of fossils from South Africa that was unveiled last week offers much more than another potential new species. What’s raised the excitement level surrounding the finding is that it includes not just fragments of an individual or two but a whole population. There were males and females, infants, children and old people, with the promise of more bones to come.

“We’ve never had such a number of bones,” said University of Wisconsin anthropologist John Hawks, one of the scientists on the expedition. And never before has it been possible for so many researchers to instantly handle replicas of the bones, which are being downloaded and 3-D-printed around the world.

Within a few days of last week’s announcement, Kristina Killgrove, a University of West Florida anthropologist, had printed replicas of jawbones, teeth, and a skull of the upright-walking H. naledi. The expedition members made files available through a website called Morphosource.

Soon her students were able to use them to study the creature’s mysterious mix of humanlike and more primitive traits. “To me, this democratizes the process of paleoanthropology,” she says. “As far as I know this is completely unprecedented.”

Over the course of a year—from January 2014 to March 2015—millions of Americans, hundreds of businesses, and dozens of policymakers weighed in at the Federal Communications Commission in favor of net neutrality. Despite the overwhelming political might of the cable and phone companies that opposed the principle, and despite a prevailing conventional wisdom all last year that it would be “impossible” to beat them, the FCC sided with the public and adopted extremely strong net neutrality rules that should be a global model for Internet freedom. On Monday, dozens of academics, nonprofits, and companies filed legal briefs in court defending that important order.

Because the victory at the FCC is so important for economic policy and was so shocking a political victory, many news organizations have profiled those responsible. Over the past months, in addition to me, many men have received credit—including Federal Communications Commission Chairman Tom Wheeler, President Barack Obama, HBO host John Oliver, and Tumblr CEO David Karp. While these men (and others, especially in the nonprofit community) played critical roles, none deserves more credit than the frequently overlooked women who helped lead the fight. Even if we guys managed to hog the credit afterward, a disproportionate number of women in the public interest, tech, and government communities had the guts and brains to lead the public to victory. They canceled annual vacations, worked around the clock, didn’t see friends and family as often as anyone would want—and ran a brilliant campaign. They should be recognized.

Here are some of the women who worked to preserve the free and open Internet.

This is the future of the workspace - but it is probably a far future for government workers. Despite this, it is well worth a look.

It knows where you live. It knows what car you drive. It knows who you’re meeting with today and how much sugar you take in your coffee. (At least it will, after the next software update.) This is the Edge, and it’s quite possibly the smartest office space ever constructed.

A day at the Edge in Amsterdam starts with a smartphone app developed with the building’s main tenant, consulting firm Deloitte. From the minute you wake up, you’re connected. The app checks your schedule, and the building recognizes your car when you arrive and directs you to a parking spot.

Then the app finds you a desk. Because at the Edge, you don’t have one. No one does. Workspaces are based on your schedule: sitting desk, standing desk, work booth, meeting room, balcony seat, or “concentration room.” Wherever you go, the app knows your preferences for light and temperature, and it tweaks the environment accordingly.
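The hot-desking flow reads like a simple lookup: assign a free space, then apply the occupant's stored preferences to it. A hypothetical sketch of that logic (names and values invented for illustration, not Deloitte's actual app):

```python
# Hypothetical sketch of the Edge's hot-desking flow described above.
PREFERENCES = {"alice": {"lux": 400, "temp_c": 21.5}}
FREE_DESKS = ["standing-12", "booth-3"]
DEFAULTS = {"lux": 300, "temp_c": 21.0}


def assign_workspace(user):
    """Hand out the next free space plus the occupant's stored settings."""
    desk = FREE_DESKS.pop(0)  # first space free for this schedule slot
    settings = PREFERENCES.get(user, DEFAULTS)  # fall back to house defaults
    return desk, settings


desk, settings = assign_workspace("alice")
print(desk, settings)
```

The real building layers scheduling, sensors, and actuators on top, but the core idea is this decoupling of people from fixed desks, with preferences following the person.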

This isn’t ready for prime time - but it’s very exciting and may be market ready when the market is ready.

A team of Harvard scientists and engineers has demonstrated a rechargeable battery that could make storage of electricity from intermittent energy sources like solar and wind safe and cost-effective for both residential and commercial use. The new research builds on earlier work by members of the same team that could enable cheaper and more reliable electricity storage at the grid level.

The mismatch between the availability of intermittent wind or sunshine and the variability of demand is a great obstacle to getting a large fraction of our electricity from renewable sources. This problem could be solved by a cost-effective means of storing large amounts of electrical energy for delivery over the long periods when the wind isn't blowing and the sun isn't shining.

In the operation of the battery, electrons are picked up and released by compounds composed of inexpensive, earth-abundant elements (carbon, oxygen, nitrogen, hydrogen, iron and potassium) dissolved in water. The compounds are non-toxic, non-flammable, and widely available, making them safer and cheaper than other battery systems.

"This is chemistry I'd be happy to put in my basement," says Michael J. Aziz, Gene and Tracy Sykes Professor of Materials and Energy Technologies at Harvard Paulson School of Engineering and Applied Sciences (SEAS), and project Principal Investigator. "The non-toxicity and cheap, abundant materials placed in water solution mean that it's safe—it can't catch on fire—and that's huge when you're storing large amounts of electrical energy anywhere near people."

The research appears in a paper published today in the journal Science.

And here one more story about the looming change in energy geopolitics.

Solar has won. Even if coal were free to burn, power stations couldn't compete

As early as 2018, solar could be economically viable to power big cities. By 2040 over half of all electricity may be generated in the same place it’s used. Centralised, coal-fired power is over

Last week, for the first time in memory, the wholesale price of electricity in Queensland fell into negative territory – in the middle of the day.

For several days the price, normally around $40-$50 a megawatt hour, hovered in and around zero. Prices were deflated throughout the week, largely because of the influence of one of the newest, biggest power stations in the state – rooftop solar.

“Negative pricing” moves, as they are known, are not uncommon. But they are only supposed to happen at night, when most of the population is asleep, demand is down, and operators of coal-fired generators are reluctant to switch off. So they pay others to pick up their output.

That's not supposed to happen at lunchtime. Daytime prices are supposed to reflect higher demand, when people are awake, office buildings are in use, and factories are in production. That's when fossil fuel generators would normally be making most of their money.

The influx of rooftop solar has turned this model on its head. There is 1,100MW of it on more than 350,000 buildings in Queensland alone (3,400MW on 1.2m buildings across the country). It is producing electricity just at the time that coal generators used to make hay (while the sun shines).

The impact has been so profound, and wholesale prices pushed down so low, that few coal generators in Australia made a profit last year. Hardly any are making a profit this year. State-owned generators like Stanwell are specifically blaming rooftop solar.
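A quick back-of-envelope check on the figures quoted above (1,100MW across 350,000+ buildings in Queensland; 3,400MW across 1.2m buildings nationally) shows what these numbers imply for the average rooftop installation:

```python
# Average rooftop system size implied by the article's figures.
qld_mw, qld_buildings = 1_100, 350_000
national_mw, national_buildings = 3_400, 1_200_000

qld_kw_per_roof = qld_mw * 1_000 / qld_buildings                  # ~3.1 kW
national_kw_per_roof = national_mw * 1_000 / national_buildings   # ~2.8 kW

print(f"Queensland: ~{qld_kw_per_roof:.1f} kW per rooftop")
print(f"National:   ~{national_kw_per_roof:.1f} kW per rooftop")
```

Roughly 3 kW per roof, in other words – modest individual systems that, in aggregate, behave like one of the state's largest power stations.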

Here’s something with even more promise to accelerate the paradigm shift to abundant energy.

Solar cells will be made obsolete by 3D rectennas aiming at 40-to-90% efficiency

A new kind of nanoscale rectenna (half antenna and half rectifier) can convert solar and infrared into electricity, plus be tuned to nearly any other frequency as a detector.

Right now efficiency is only one percent, but professor Baratunde Cola and colleagues at the Georgia Institute of Technology (Georgia Tech, Atlanta) convincingly argue that they can achieve 40 percent broad-spectrum efficiency (double that of silicon and even more than multi-junction gallium arsenide) at one-tenth the cost of conventional solar cells (and with an upper limit of 90 percent efficiency for single-wavelength conversion).

It is well suited for mass production, according to Cola. It works by growing fields of carbon nanotubes vertically, with lengths roughly matching the wavelength of the energy source (one micron for solar); capping the nanotubes with an insulating dielectric (aluminum oxide on the tethered end of the nanotube bundles); and then growing a low-work-function metal (calcium/aluminum) on the dielectric. The result is a rectenna with a two-electron-volt potential that collects sunlight and converts it to direct current (DC).

"Our process uses three simple steps: grow a large array of nanotube bundles vertically; coat one end with dielectric; then deposit another layer of metal," Cola told EE Times. "In effect we are using one end of the nanotube as a part of a super-fast metal-insulator-metal tunnel diode, making mass production potentially very inexpensive up to 10-times cheaper than crystalline silicon cells."
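As a rough physics sanity check (not from the article itself): the one-micron nanotube length quoted for solar corresponds, via E = hc/λ, to a photon energy of about 1.24 eV – the right order of magnitude for the near-infrared end of sunlight:

```python
# Photon energy at the quoted one-micron antenna length, via E = h*c / lambda.
h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s
eV = 1.602e-19  # joules per electron-volt

wavelength = 1e-6  # 1 micron, the solar-tuned nanotube length quoted above
energy_eV = h * c / wavelength / eV
print(f"Photon energy at 1 micron: ~{energy_eV:.2f} eV")  # ~1.24 eV
```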

This is something that is worth monitoring - especially as solar and other renewable energy capabilities spread across the developing world. This represents a fundamentally different type of infrastructure.

The British architect is working on a large-scale project to build three droneports to deliver medical supplies and electrical parts

Lord Norman Foster, the British architect who has built iconic buildings like the Gherkin in London, is building the world’s first “droneports” in Rwanda.

The goal is to transport urgent medical supplies and electronic parts to remote parts of the East African country via unmanned flying vehicles or drones.

“There will be about 2.2 billion people in Africa by 2050, or 1 in 4 inhabitants of the planet will be African. How can their infrastructure even think about keeping up with this expansion?” Lord Foster told the Telegraph.
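The two figures Lord Foster quotes are consistent with each other, as a quick check shows: 2.2 billion Africans as one in four inhabitants of the planet implies a world population of roughly 8.8 billion by 2050, in line with common projections:

```python
# Checking the quoted projection: 2.2 billion Africans as "1 in 4
# inhabitants of the planet" implies the world population below.
africa_2050 = 2.2e9
share = 1 / 4
world_implied = africa_2050 / share
print(f"Implied world population in 2050: {world_implied / 1e9:.1f} billion")  # ~8.8 billion
```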

The solution is clear: cargo drone routes have value wherever roads are limited. “Only a third of Africans live within 2km of an all-season road,” Lord Foster said. There are currently no continental motorways, almost no tunnels, and not enough bridges that can reach people living in far-flung areas of the continent.

The architecture firm Foster + Partners are working with Lausanne-based university École Polytechnique Fédérale de Lausanne (EPFL) and its associated initiative Afrotech on the project, which includes three separate droneports and is expected to be built within four years.

The droneports will be designed as a row of vaulted brick structures and will host a health clinic, a digital fabrication shop to make spare drone parts, a post and courier room, and an e-commerce trading hub.

In a short time, more people will stream video online each day than will watch scheduled programs on traditional TV, according to a new study from Ericsson, the Swedish communications company.

As the chart below shows, the percentage of people who say they stream video from services like Netflix, YouTube, and Hulu each day has increased dramatically over the last five years, from about 30% in 2010 to more than 50% this year.

During the same period, the percentage of people who say they watch traditional TV, from providers like Comcast or DirecTV, has dropped by about 10%.

This is something from the World Economic Forum - about one of my great chagrins - bottled water, the greatest scam since ‘pet rocks’. It seduces people into a frame of ‘purity’ and degrades ubiquitous public access to drinking water by displacing drinking fountains (and in this way possibly masks an implicit racism).

Over the last 15 years, the bottled-water industry has experienced explosive growth, which shows no sign of slowing. In fact, bottled water – including everything from “purified spring water” to flavored water and water enriched with vitamins, minerals, or electrolytes – is the largest growth area in the beverage industry, even in cities where tap water is safe and highly regulated. This has been a disaster for the environment and the world’s poor.

The environmental problems begin early on, with the way the water is sourced. The bulk of bottled water sold worldwide is drawn from the subterranean water reserves of aquifers and springs, many of which feed rivers and lakes. Tapping such reserves can aggravate drought conditions.

But bottling the runoff from glaciers in the Alps, the Andes, the Arctic, the Cascades, the Himalayas, Patagonia, the Rockies, and elsewhere is not much better, as it diverts that water from ecosystem services like recharging wetlands and sustaining biodiversity. This has not stopped big bottlers and other investors from aggressively seeking to buy glacier-water rights. China’s booming mineral-water industry, for example, taps into Himalayan glaciers, damaging Tibet’s ecosystems in the process.

Much of today’s bottled water, however, is not glacier or natural spring water but processed water, which is municipal water or, more often, directly extracted groundwater that has been subjected to reverse osmosis or other purification treatments. Not surprisingly, bottlers have been embroiled in disputes with local authorities and citizens’ groups in many places over their role in water depletion, and even pollution. In drought-seared California, some bottlers have faced protests and probes; one company was even banned from tapping spring water.

Worse, processing, bottling, and shipping the water is highly resource-intensive. It takes 1.6 liters of water, on average, to package one liter of bottled water, making the industry a major water consumer and wastewater generator. And processing and transport add a significant carbon footprint.
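The 1.6-litres-per-litre figure above is worth pausing on: it means that for every litre of bottled water sold, more than half a litre of additional water is consumed just in packaging it:

```python
# Overhead implied by the article's 1.6-litres-used-per-litre-bottled figure.
water_used_per_litre_bottled = 1.6
overhead = water_used_per_litre_bottled - 1.0   # extra litres consumed per litre sold
overhead_pct = overhead / 1.0 * 100             # as a percentage of the product itself

print(f"Extra water consumed per litre bottled: {overhead:.1f} L ({overhead_pct:.0f}%)")
```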

Bottled water is compounding the world’s resource and environmental challenges. It is making it harder to deliver potable water to the world’s poor. It delivers no health benefits over clean tap water. And it does not even taste better; indeed, blind taste tests reveal that people cannot tell the difference between bottled and tap water.

This may seem silly - but I want one - I think this would also be great for kids.

An amateur inventor from Sheffield, England, has 3D printed a detachable silicone spoon cover that he believes will change how we eat noodles forever. The NoodVamp, as he calls it, is being backed by Unilever’s Pot Noodle and is set to be manufactured and delivered over the next two months.

It’s one of the most beloved and ubiquitous meals amongst children and college students, the perfect late-night study snack, an instant dose of comfort, warmth and salty goodness, and all you have to do is add water. Whether you call them instant noodles, ramen, cup’o’noodles, or Pot Noodle, as it is branded in the UK by Unilever, there is a strong chance you’ve got a few squirreled away in your cupboard for those nights when you just can’t bear to open the fridge.

But no matter how much you love them, noodles are subject to one fatal flaw: they’re constantly slipping off your spoon, splashing your shirt and face with broth, and leaving you with nothing but a big bite of air. Sure, you could switch back and forth between a fork and spoon, or even master chopsticks, but the whole point of Pot Noodles is that they’re supposed to be effortless! And don’t even think about using a spork, that “mediocre compromise” between spoon and fork.

Here’s a looming possibility emerging from the domestication of DNA - something your kids may want for their birthday or Christmas.

The pigs are endearing but scientists warn that they may be a distraction from more serious research.

Cutting-edge gene-editing techniques have produced an unexpected byproduct — tiny pigs that a leading Chinese genomics institute will soon sell as pets.

BGI in Shenzhen, the genomics institute that is famous for a series of high-profile breakthroughs in genomic sequencing, originally created the micropigs as models for human disease, by applying a gene-editing technique to a small breed of pig known as Bama. On 23 September, at the Shenzhen International Biotech Leaders Summit in China, BGI revealed that it would start selling the pigs as pets. The animals weigh about 15 kilograms when mature, or about the same as a medium-sized dog.

At the summit, the institute quoted a price tag of 10,000 yuan (US$1,600) for the micropigs, but that was just to "help us better evaluate the market”, says Yong Li, technical director of BGI’s animal-science platform. In future, customers will be offered pigs with different coat colours and patterns, which BGI says it can also set through gene editing.