
Friday Thinking 17 June 2016

Hello all – Friday Thinking is a humble curation of my foraging in the digital environment. My purpose is to pick interesting pieces, based on my own curiosity (and the curiosity of the many interesting people I follow), about developments in some key domains (work, organization, social-economy, intelligence, domestication of DNA, energy, etc.) that suggest we are in the midst of a change in the conditions of change - a phase-transition. That tomorrow will be radically unlike yesterday.

The Global EV Outlook 2016 study noted, "The year 2015 saw the global threshold of 1 million electric cars on the road exceeded, closing at 1.26 million [100 times more than in 2010]. This is a symbolic achievement highlighting significant efforts deployed jointly by governments and industry over the past ten years. In 2014, only about half of today’s electric car stock existed. In 2005, electric cars were still measured in hundreds."

The economy is made of people, networks of people and the things that people make. People and networks of people accumulate knowledge and knowhow, both individually and collectively, and they use that knowledge and knowhow to produce a variety of products that, in turn, augment people’s capacity to produce new products.

A traditional interpretation of products as physical capital would tell you that products embody past production, and would abstract products numerically based on a product’s cost or commercial value. Under the hood, however, products are made of order — or information. To understand this idea, imagine that you have just won a new Bugatti Veyron, a car worth roughly $2.5 million. Now imagine that you crash that Bugatti against a wall, escaping unharmed but totaling the car. Of course, the value of the Bugatti evaporated when you crashed it against the wall because that value was not stored in its atoms, but rather in the way in which these atoms were arranged. And that physical order is information.

Under the hood, products are made of information, which is better measured in bits than in dollars or euros. This means that the actions we use to make products are acts of computation. Of course, we often overlook the computational nature of economic activities, but making a sandwich, sorting socks, building a house, or writing a book are acts of computation, because they are activities that involve rearranging the state of the world. No matter whether the rearrangement involves modifying synapses in your brain or sorting a pile of bricks, these rearrangements are technically acts of computation, as you are using energy to produce order or information. This tells us that the knowledge and knowhow that we accumulate, both as individuals and as a society, are nothing but the software that powers our economy’s computational capacity, and that the economy is nothing but a manifestation of the co-evolution of information and computation. Products are made of information, which we can measure in bits, and people execute computation, which we can measure in flops (floating-point operations per second).
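To make the bit-counting concrete, here is a tiny illustration of my own (not from the excerpt): the information needed to single out one particular ordering of n distinct items from the n! possibilities is log2(n!), so sorting a shuffled pile into that order "produces" that many bits of physical order.

```python
import math

def bits_to_specify_order(n_items: int) -> float:
    """Bits needed to single out one ordering of n distinct
    items from the n! possible orderings: log2(n!)."""
    return math.log2(math.factorial(n_items))

# Sorting a pile of 10 distinct socks into one particular order
# "produces" roughly 21.8 bits of physical order.
print(round(bits_to_specify_order(10), 1))  # 21.8
```

The same logic applies to bricks or synapses: the more possible arrangements an act of work rules out, the more information that act embodies.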

We are all familiar with the cliché of the castaway holding a briefcase full of money on a desert island. Of course, money is useless for the castaway because there is nothing for him to buy. But, just as objects are a more fundamental form of economic value than currency, the ability to create objects is a more fundamental form of economic value than the objects themselves. It is the ability to make, which is computation, that determines the capacity of economic systems.

...central bankers will issue national currencies in digital form in the near future. ...there are many reasons for this shift, including:

Banking and finance are next up to digitalize and globalize, just as the music, publishing and communications sectors have grown and benefitted from new technologies. He noted that soon, the phrase “cross-border payment” will make about as much sense as “cross-border email.”

Bitcoin is into its seventh year and is still showing robust resilience and steady growth, with transaction volume now at 250,000 a day.

Multisig and other technologies provide more robust security than traditional banknotes and systems, and more flexibility than current centralized systems.

The efficiencies and cost savings of these new technologies will be the strongest reason to digitalize. (At this point in the presentation Ludwin brought out his phone and sent a Bitcoin donation to Wikipedia in one easy step – likely the first Bitcoin donation sent from the Federal Reserve.)

Other financial institutions are already building new networks to digitalize assets such as securities and currencies, so they can move more efficiently and securely.

Central banks will want to be in a better position to influence liquidity in the increasingly important capital markets that operate outside of depository institutions.

The need to re-imagine everything includes re-imagining the Web itself.

The project is in its early days, but the discussions — and caliber of the people involved — underscored how the World Wide Web’s direction in recent years has stirred a deep anxiety among some technologists. The revelations by Edward J. Snowden that the web has been used by governments for spying and the realization that companies like Amazon, Facebook and Google have become gatekeepers to our digital lives have added to concerns.

“The web is already decentralized,” Mr. Berners-Lee said. “The problem is the dominance of one search engine, one big social network, one Twitter for microblogging. We don’t have a technology problem, we have a social problem.”

Twenty-seven years ago, Tim Berners-Lee created the World Wide Web as a way for scientists to easily find information. It has since become the world’s most powerful medium for knowledge, communications and commerce — but that doesn’t mean Mr. Berners-Lee is happy with all of the consequences.

“It controls what people see, creates mechanisms for how people interact,” he said of the modern day web. “It’s been great, but spying, blocking sites, repurposing people’s content, taking you to the wrong websites — that completely undermines the spirit of helping people create.”

So on Tuesday, Mr. Berners-Lee gathered in San Francisco with other top computer scientists — including Brewster Kahle, head of the nonprofit Internet Archive and an internet activist — to discuss a new phase for the web.

This is a wonderful example of why we have to re-imagine the Internet and the public infrastructure of the digital environment. The key is not so much to focus on Twitter, but on enabling a more participatory democracy and an immune-system approach to security.

“Everyone can speak to everyone else, whenever they want,” said Mr. Rodríguez Salas in his office surrounded by Twitter paraphernalia, while sporting a wristband emblazoned with #LoveTwitter. “We are on Twitter because that’s where the people are.”

In 2011, he asked all town officials — from his deputy to the street sweeper — to open accounts on Twitter and send messages about their daily activities. The goal, he said, was to create greater accountability and transparency over how Jun was run. Mr. Rodríguez Salas added that he chose Twitter over Facebook because Twitter allowed quicker interactions.

To Mr. Rodríguez Salas’s more than 400,000 followers on the social network, his actions came as no surprise. That is because the Spanish politician has spent much of the last five years turning Jun (pronounced hoon), whose population barely tops 3,500, into one of the most active users of Twitter anywhere in the world.

For the town’s residents, more than half of whom have Twitter accounts, their main way to communicate with local government officials is now the social network. Need to see the local doctor? Send a quick Twitter message to book an appointment. See something suspicious? Let Jun’s policeman know with a tweet.

People in Jun can still use traditional methods, like completing forms at the town hall, to obtain public services. But Mr. Rodríguez Salas said that by running most of Jun’s communications through Twitter, he not only has shaved on average 13 percent, or around $380,000, from the local budget each year since 2011, but he also has created a digital democracy where residents interact online almost daily with town officials.

This is a 7 min read - and interesting for all of us who are trying to re-imagine the future of our cities. Urban design, changing demographics, the digital environment and social physics are all vital to understand in order to create livable cities that enable both innovation and security.

The U.S. is perceived as the heart of global entrepreneurship, but Europe might soon take the crown. Here's why.

Given the refugee crisis, the potential Brexit, terrorism, and the recent economic crises in Greece and elsewhere, it can be easy to overlook the European Union as a viable region. In recent years, however, I have begun to believe that while the U.S. has been the dominant force in modern entrepreneurship, the future looks less promising for the U.S. than most think.

This point of view certainly does not support the prevailing narrative that the U.S. is the dominant country in the world to start and finance a company. The EU tends to be more bureaucratic, has a culture less tolerant of failure, has much less access to venture capital than the U.S., and has the added complication of having to cross dozens of countries and language barriers to serve a similarly sized market as U.S. entrepreneurs. But the EU is well positioned to not only compete but even potentially lead the democratized and urbanized entrepreneurial revolution in the decades to come.

The forces of urbanization, collaboration, and democratization are converging. People are flooding into cities, bringing many challenges and innovation opportunities with them; collaborative business models and the sharing economy are taking off in cities; and the democratization of innovation and technology is putting the tools of innovation and entrepreneurship in the hands of more citizens than ever before.

These trends are reshaping the geography of innovation. And as these changes transform our cities, I believe Europe will replace North America as the startup hub of the world.

This is another article that is worth the read and thought. It too is about re-imagining the Internet - but this re-imagining suggests a deeply interactive capacity to engage and query within the digital environment. The app has already peaked; we are already encountering the edge of the horizon in the personal AI-ssistant.

The team behind Viv hopes to change how we interact with just about everything — and build a new economic model for the Internet along the way.

About halfway through a 90-minute exploration of Viv, the recently debuted and much heralded next-generation smart assistant platform, I started to experience a bit of deja vu. Here were two highly intelligent and credentialed founders, animated by a sense of purpose and a shared conviction that there Had To Be A Better Way, extolling the virtues of a new platform that, if only it were to be adopted at critical mass, would Change The World For the Better. It reminded me of my early days covering Apple in the 1980s, or Google in the early aughts. And I found myself believing that, in fact, the world would be a better place if Viv’s vision prevailed.

But that’s a very big “if.” What Viv is trying to create is a platform shift on the scale of Google search or Apple’s app store — a new way to interact with the Internet itself. Yes, the interface is an intelligent agent that you talk to — much like Apple’s Siri or Amazon’s Alexa. But for Viv to truly flourish, the Internet would need to reorganize around a new economic model, one that looks dramatically different than the current hegemony based on the big five of Search (Google), Commerce (Amazon), Social (Facebook), Enterprise (Microsoft), and Mobile (Apple/Google).

These five horsemen of the Internet represent the most powerful cabal in business today, and they won’t easily yield control of their domains to a hot-shot startup, regardless of its pedigree (the founders and many of the team worked at Apple on Siri).

Here’s a 30 min video about Viv. This is a MUST VIEW for anyone interested in the future of our AI-ssistants.

Just as we are getting used to the ubiquitous smartphone, a new horizon is emerging for how we will interact with and in the digital environment. Augmented reality, ‘mixed reality’ and virtual reality will become much more ‘real’. This may also accelerate the effectiveness of self-driving vehicles.

A Lenovo smartphone unveiled Thursday will be clever enough to grasp your physical surroundings—such as the room's size and the presence of other people—and potentially transform how we interact with e-commerce, education and gaming.

Today's smartphones track location through GPS and cell towers, but that does little more than tell apps where you are. Tapping Google's 3-year-old Project Tango, the new Phab2 Pro phone will use software and sensors to track motions and map building interiors, including the location of doors and windows.

That's a crucial step in the promising new frontier in "augmented reality," or the digital projection of lifelike images and data into a real-life environment.

If Tango fulfills its promise, furniture shoppers will be able use the Phab2 Pro to download digital models of couches, chairs and coffee tables to see how they would look in their actual living rooms. Kids studying the Mesozoic Era would be able to place a virtual Tyrannosaurus or Velociraptor in their home or classroom—and even take selfies with one. The technology would even know when to display information about an artist or a scene depicted in a painting as you stroll through a museum.

Tango will be able to create internal maps of homes and offices on the fly. Google won't need to build a mapping database ahead of time, as it does with existing services like Google Maps and Street View.

This is an excellent 7 min read discussing the knowledge and informational foundation of economics - one not captured by traditional aggregate measures. Well worth the read for anyone interested in knowledge management and economics. What is really important is to apply this approach to organizations - re-imagining aspects such as an organization’s technological framework and occupational structure in order to better understand the organizational computational competence from which futures must be evolved and from which affordances must be obtained.

Late in the last decade, I developed a mathematical technique that can be used to characterize an economy’s ability to produce products. This measure of economic complexity, which makes use of information about the diversity of countries and the ubiquity of products, explains a substantial fraction of a country’s level of income, but it also explains future economic growth. This is because countries that have a capacity to produce products (i.e., to compute information) that exceeds what would be expected given their current level of income tend to grow faster than those that don’t have that excess computational capacity. China and India, for instance, are countries that have a computational capacity comparable with that of countries ten times richer than they are — and are therefore doomed to grow.
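The "mathematical technique" described here is the economic complexity measure. As a minimal, illustrative sketch of its basic ingredients - diversity, ubiquity, and one step of the underlying iterative correction ("method of reflections") - here is a toy version using a made-up country-by-product matrix (the data and the simplification are mine, not the author's):

```python
# Toy country-by-product matrix: M[c][p] = 1 if country c exports
# product p competitively (illustrative data, not real trade figures).
M = [
    [1, 1, 1, 1],  # highly diversified country
    [1, 1, 0, 0],
    [1, 0, 0, 0],  # makes only the most ubiquitous product
]

# Diversity: how many products each country makes.
diversity = [sum(row) for row in M]

# Ubiquity: how many countries make each product.
ubiquity = [sum(col) for col in zip(*M)]

# One step of the "method of reflections": the average ubiquity of
# the products a country makes (a lower average suggests the country
# makes rarer, harder-to-copy products - a more complex economy).
avg_ubiquity = [
    sum(u for m, u in zip(row, ubiquity) if m) / d
    for row, d in zip(M, diversity)
]

print(diversity)     # [4, 2, 1]
print(ubiquity)      # [3, 2, 1, 1]
print(avg_ubiquity)  # [1.75, 2.5, 3.0]
```

The full measure iterates this correction many times; the point of the sketch is only that "computational capacity" is inferred from what a country makes, not from what it earns.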

The demand for non-aggregate theories of economic growth is easy to understand after considering the limitations of aggregation. Of course, we all know that — while useful to some extent — totals and averages provide only a coarse representation of complex systems, such as economies. But the limitations of our aggregative approaches transcend the abuse of aggregates because they also come from an unfortunate choice of units and language. Economics, being a discipline obsessed with prices, has pushed aggregations based on the language of commerce, translating everything into units of dollars, pesos, or pounds. Certainly, there is merit in the use of prices as a trick to facilitate aggregation, but prices are very much “over the hood” of economic systems. Under the hood, economies are made of people, objects and the ability of people to create objects, all of which can be powerfully described using the language of information and computation. Here, I will describe how we can use the language of computation and information to describe economic systems, and also, to obtain insights that are hard to come by using monetary descriptions of the economy.

The saga of the evolution of the blockchain continues.

At a low-key three-day conference in Washington, D.C., last week organized by the Federal Reserve, the World Bank and the International Monetary Fund, more than 90 central banks from around the world heard from members of the Bitcoin community, including Perianne Boring, founder and president of the Chamber of Digital Commerce, Bloq CEO Jeff Garzik and Chain CEO Adam Ludwin.

When Satoshi Nakamoto released the Bitcoin white paper in 2008, little did Bitcoin’s creator know that less than 10 years later, Federal Reserve Chair Janet Yellen would be encouraging central banks around the world to take a closer look at the benefits of Bitcoin and blockchain technology to improve the world’s financial systems.

In her remarks to the International Conference on Policy Challenges for the Financial Sector, the chair of the Board of Governors of the Federal Reserve acknowledged heightened concerns about cybersecurity, and said banks must move forward into the digital age and learn how to apply Bitcoin, blockchain and distributed ledger technologies.

This is an important emerging institution of the digital environment. One that deserves support and should become part of our public commons.

The Internet Archive is a 501(c)(3) non-profit that was founded to build an Internet library. Its purposes include offering permanent access for researchers, historians, scholars, people with disabilities, and the general public to historical collections that exist in digital format.

Founded in 1996 and located in San Francisco, the Archive has been receiving data donations from Alexa Internet and others. In late 1999, the organization started to grow to include more well-rounded collections. Now the Internet Archive includes texts, audio, moving images, and software as well as archived web pages in its collections, and provides specialized services for adaptive reading and information access for the blind and other persons with disabilities.

One more milestone in AI’s advance into the domains of human work.

After three days of continuously predicting, simulating and evaluating, the computer was able to come up with a core genetic network that explained how the worm's regeneration took place.

One of biology's biggest mysteries - how a sliced up flatworm can regenerate into new organisms - has been solved independently by a computer. The discovery marks the first time that a computer has come up with a new scientific theory without direct human help.

Computer scientists from the University of Maryland programmed a computer to randomly predict how a worm's genes formed a regulatory network capable of regeneration, before evaluating these predictions through simulation.

"It's not just statistics or number-crunching," Levin told Popular Mechanics. "The invention of models to explain what nature is doing is the most creative thing scientists do. This is the heart and soul of the scientific enterprise. None of us could have come up with this model; we (as a field) have failed to do so after over a century of effort."

This is a great 13 min TED Talk - well worth the view for anyone interested in the domestication of DNA. Key message - it can be frightening to act - but not acting can be worse.

CRISPR gene drives allow scientists to change sequences of DNA and guarantee that the resulting edited genetic trait is inherited by future generations, opening up the possibility of altering entire species forever. More than anything, the technology has led to questions: How will this new power affect humanity? What are we going to use it to change? Are we gods now? Join journalist Jennifer Kahn as she ponders these questions and shares a potentially powerful application of gene drives: the development of disease-resistant mosquitoes that could knock out malaria and Zika.

A key milestone has been reached in an important post ‘human genome’ project. The results of the project have very significant implications for us all - including a better understanding of human cognition.

Individual differences in brain connectivity can reliably predict a person’s behavior. The findings will be discussed in an upcoming symposium.

Scans of an individual’s brain activity are emerging as powerful predictive tools, thanks to the Human Connectome Project (HCP), an initiative of the National Institutes of Health. Such individual differences were often discarded as “noise” – uninterpretable apart from group data. Now, recently reported studies based on HCP neuroimaging and psychological data show that individual differences in brain connectivity can reliably predict a person’s behavior. Such scans might someday help clinicians personalize diagnosis and treatment of mental disorders, say researchers.

One study (link is external) found that an individual’s unique resting state connectivity “fingerprint” can accurately predict fluid intelligence. Another developed a model (link is external) that similarly predicted individuals’ performance on a variety of tasks, including reading and decision-making. Notably, no brain scans or psychological tests were required specifically for these studies; instead, the researchers drew upon an unprecedented trove of shared data from more than a thousand subjects made available by the HCP.

Technical advances achieved during the project have transformed the field – for example, enabling much more efficient data collection by dramatically shortening the duration of scans while maintaining high-resolution images. The wealth of data gathered has been shared with the wider neuroimaging community via a data archive (link is external) supported by NIH. These user-friendly tools for data mining, analysis, and visualization are enabling discoveries such as those on the predictive power of individual scan data, noted above.

This is an amazing result - perhaps just the beginning and just part of our domestication of DNA.

“This was just a single trial, and a small one,” cautioned Steinberg, who led the 18-patient trial and conducted 12 of the procedures himself. (The rest were performed at the University of Pittsburgh.) “It was designed primarily to test the procedure’s safety. But patients improved by several standard measures, and their improvement was not only statistically significant, but clinically meaningful. Their ability to move around has recovered visibly. That’s unprecedented. At six months out from a stroke, you don’t expect to see any further recovery.”

People disabled by a stroke demonstrated substantial recovery long after the event when modified adult stem cells were injected into their brains.

Injecting modified, human, adult stem cells directly into the brains of chronic stroke patients proved not only safe but effective in restoring motor function, according to the findings of a small clinical trial led by Stanford University School of Medicine investigators.

The patients, all of whom had suffered their first and only stroke between six months and three years before receiving the injections, remained conscious under light anesthesia throughout the procedure, which involved drilling a small hole through their skulls. The next day they all went home.

Although more than three-quarters of them suffered from transient headaches afterward — probably due to the surgical procedure and the physical constraints employed to ensure its precision — there were no side effects attributable to the stem cells themselves, and no life-threatening adverse effects linked to the procedure used to administer them, according to a paper, published online June 2 in Stroke, that details the trial’s results.

Along with the domestication of DNA, progress in the cognitive sciences and better social conditions, we are living longer and healthier.

For those worried about the burdens of old age, a recent Harvard study has some good news.

The study says that the increase in life expectancy in the past two decades has been accompanied by an even greater increase in years free of disability, thanks in large measure to improvements in cardiovascular health and declines in vision problems.

The study found that in 1992, the life expectancy of the average 65-year-old was 17.5 years, 8.9 of which were free from disability. By 2008, total life expectancy had risen to 18.8 years. In addition to the overall increase, the number of disability-free years increased, from 8.9 to 10.7, while the number of disabled years fell, from 8.6 to 8.1.
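As a quick consistency check on these figures (assuming disabled years are simply total life expectancy minus disability-free years):

```python
# Figures quoted from the Harvard/NBER study above.
le_1992, free_1992 = 17.5, 8.9   # 1992: total and disability-free years
le_2008, free_2008 = 18.8, 10.7  # 2008: total and disability-free years

disabled_1992 = round(le_1992 - free_1992, 1)
disabled_2008 = round(le_2008 - free_2008, 1)

# Matches the quoted decline in disabled years, 8.6 -> 8.1.
print(disabled_1992, disabled_2008)  # 8.6 8.1
```

So the 1.3-year gain in total life expectancy decomposes into 1.8 more disability-free years and 0.5 fewer disabled years, exactly as the study reports.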

The study, described in a May 30 working paper released by the National Bureau of Economic Research (NBER), was co-authored by David Cutler, the Otto Eckstein Professor of Applied Economics in the Department of Economics; Mary Beth Landrum, professor of health care policy at Harvard Medical School (HMS); Michael Chernew, Leonard D. Schaeffer Professor of Health Policy at HMS; and Kaushik Ghosh of the NBER.

One of my favorite domains of interest is self-organization - which by definition requires a variety of forms of decisioning. This is a truly fascinating study - an exploration of both the power and extent of ‘social computing’.

Neither plant, animal nor fungus, P. polycephalum has become an unlikely candidate for studies of cognition, due to its spectacular problem-solving abilities. In recent studies, Physarum has been shown to solve labyrinth mazes, make complicated trade-offs, anticipate periodic events, remember where it has been, construct transport networks that have similar efficiency to those designed by human engineers and even make irrational decisions -- a capability that has long been viewed as a by-product of brain circuitry.

...this study provides insight into ancestral mechanisms of decision making and suggests that fundamental principles of decision making, information processing and even cognition are shared among diverse biological systems.

How do organisms without brains make decisions? Most of life is brainless and the vast majority of organisms on Earth lack neurons altogether. Plants, fungi and bacteria must all cope with the same problem as humans -- to make the best choices in a complex and ever-changing world or risk dying - without the help of a simple nervous system in many cases.

It is interesting to go from slime molds to humans - this is a very interesting article, worth the read, about how the evolution of nature produced humans.

"There is this myth of something pristine in the recent past or present that we can study and work back towards," says Rick. "That's really a myth that there is anything pristine. We've always been a part of our environment. We've always impacted it. Pristine is not realistic. What's the balance that we want? What environment do we want to restore?"

A new study suggests that trying to return habitats to a non-human-impacted environment might not be realistic.

“Humans are very much a part of nature,” Zeder says. “The ways in which we modify nature are part of a package of behaviors that we inherited from other species. Look at what beavers do, or what ants do. Manipulating the environment in a way that is favorable. Humans are the ultimate niche constructors.”

These ideas are among the conclusions resulting from years of collaboration between scientists from many different disciplines, culminating in a new research paper of which Zeder is a co-author.

The paper attempts to debunk the common perception that large-scale transformation of wild places by humans began with the industrial revolution. Zeder and her colleagues were part of a team of scientists from various fields who set out to look very closely at how human beings have transformed their habitat throughout history. Their conclusions will shock many people and likely begin a conversation among scientists and policy-makers that will continue for years.

“I think that the Anthropocene and the Holocene are synonymous,” says Zeder. “Humans have been niche-constructing through their entire history.”

Most scientists would agree that the Holocene started roughly 11,700 years ago at the end of the Pleistocene. Many species of megafauna, including mammoths, mastodons and saber-toothed cats became extinct at around that time. Humans were spreading all over the Earth, having already penetrated the Americas, Australia and many islands. Soil biology was changing. Agriculture was emerging in the Fertile Crescent. The glaciers had been in retreat for a few thousand years and a warming trend was under way.

If Zeder and her colleagues are correct in their view that humans were the primary engineers of change on Earth since the late Pleistocene, then maybe there really never was a Holocene. This was the Anthropocene all along.

I think that anyone who seeks to undertake management, leadership, entrepreneurship or efforts to innovate should be required to develop some significant competence in improvisation. This is an interesting article that provides some support for this idea.

We hear it all the time on cop shows; in everyday life, it translates to something like, “It pays to have a Plan B” or allusions to the Robert Burns poem about “the best laid plans” often going awry.

But new Wharton research shows that there is an important downside to making a backup plan – merely thinking through a backup plan may actually cause people to exert less effort toward their primary goal, and consequently be less likely to achieve that goal they were striving for. Jihae Shin, a former Wharton Ph.D. student who is now a professor at the University of Wisconsin, and Katherine Milkman, a Wharton professor of operations, information and decisions, detail their findings in the paper, “How Backup Plans Can Harm Goal Pursuit: The Unexpected Downside of Being Prepared for Failure,” which was published in the journal, Organizational Behavior and Human Decision Processes.

This is good news, presenting a potentially useful way to capture significant amounts of carbon emissions.

Scientists and engineers working at a major power plant in Iceland have shown for the first time that carbon dioxide emissions can be pumped into the earth and changed chemically to a solid within months—radically faster than anyone had predicted. The finding may help address a fear that so far has plagued the idea of capturing and storing CO2 underground: that emissions could seep back into the air or even explode out. A study describing the method appears this week in the leading journal Science.

The Hellisheidi power plant is the world's largest geothermal facility; it and a companion plant provide the energy for Iceland's capital, Reykjavik, plus power for industry, by pumping up volcanically heated water to run turbines. But the process is not completely clean; it also brings up volcanic gases, including carbon dioxide and nasty-smelling hydrogen sulfide.

Under a pilot project called Carbfix, started in 2012, the plant began mixing the gases with the water pumped from below and reinjecting the solution into the volcanic basalt below. In nature, when basalt is exposed to carbon dioxide and water, a series of natural chemical reactions takes place, and the carbon precipitates out into a whitish, chalky mineral. But no one knew how fast this might happen if the process were harnessed for carbon storage. Previous studies have estimated that in most rocks, it would take hundreds or even thousands of years. In the basalt below Hellisheidi, 95 percent of the injected carbon was solidified within less than two years.

Tromso, a Norwegian city known as the "Gateway to the Arctic", receives no sunlight for two months of the year.

Yet this remote, beautiful, snowy city is the unlikely focus of the global electric car industry, attracting the attention of Silicon Valley entrepreneurs such as Elon Musk, founder of electric car maker Tesla.

His company has recently opened a showroom there - its most northerly outpost.

Why? Because Norway, it seems, is simply nuts about electric cars.

The country is the world leader in electric cars per capita and has just become the fourth country in the world to have 100,000 of them on the roads.

When you consider the other nations on the list are the US (population: 320 million), Japan (pop. 130 million) and China (pop. 1.35 billion), then that is quite an achievement for this rugged, sparsely populated country of just five million.
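To see how lopsided the per-capita comparison is, here is a quick back-of-the-envelope calculation using the population figures quoted above and the 100,000-car milestone each country has reached (the larger countries have more than 100,000 EVs, so their true rates are somewhat higher — this is just a lower-bound illustration):

```python
# EVs per 1,000 people at the 100,000-car milestone, using the
# approximate populations quoted in the article (in millions).
populations_m = {
    "Norway": 5,
    "US": 320,
    "Japan": 130,
    "China": 1350,
}

EV_MILESTONE = 100_000  # each country has at least this many EVs on the road

for country, pop_m in populations_m.items():
    per_thousand = EV_MILESTONE / (pop_m * 1_000_000) * 1000
    print(f"{country}: {per_thousand:.2f} EVs per 1,000 people")
```

Norway comes out at 20 EVs per 1,000 people — roughly 65 times the US rate at the same milestone.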

This is a Roomba for libraries - or for that matter any system of physical records - inventory management - the question is how long libraries of physical books will remain a ‘thing’ - except as actual relics.

Computer systems have helped catalogue libraries for decades, but if some reckless reader has put a book back in the wrong spot, it's a daunting task for librarians to search the entire building for it – but not for robotic librarians. Researchers at A*STAR's Institute for Infocomm Research are designing robots that can self-navigate through libraries at night, scanning spines and shelves to report back on missing or out-of-place books.

This autonomous robotic shelf-scanning (AuRoSS) platform scans RFID tags on the books and produces a report. In the morning, the human librarians can check the results and can easily see which books are in the wrong spot and where they belong. There's still a need for human labor, but it's far less time-consuming than manually searching every shelf for misplaced titles.
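The report-generation step the article describes — comparing the catalogue's expected shelf order against the RFID tags actually scanned — can be sketched in a few lines. This is purely illustrative pseudologic (the function and tag names are hypothetical, not the actual AuRoSS software):

```python
# Hypothetical sketch: diff the catalogue's expected shelf contents
# against the tags the robot scanned overnight, and flag books that
# are missing entirely or sitting on the wrong shelf.
def shelf_report(expected, scanned):
    """expected/scanned map shelf id -> list of RFID tag ids."""
    report = {"missing": [], "misplaced": []}
    # Invert the scan: where was each tag actually seen?
    located = {tag: shelf for shelf, tags in scanned.items() for tag in tags}
    for shelf, tags in expected.items():
        for tag in tags:
            if tag not in located:
                report["missing"].append(tag)
            elif located[tag] != shelf:
                # (tag, where it belongs, where it was found)
                report["misplaced"].append((tag, shelf, located[tag]))
    return report

expected = {"A1": ["b001", "b002"], "A2": ["b003"]}
scanned  = {"A1": ["b001"], "A2": ["b003", "b002"]}
print(shelf_report(expected, scanned))
# b002 belongs on shelf A1 but was scanned on A2; nothing is missing.
```

The morning report the librarians check is essentially this diff, produced for every shelf in the building.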

The wheeled robot uses lasers and ultrasonic sensors to guide it through the stacks with precision down to the centimeter. "We decided to detect the shelf surface itself, and use that as a reference to plan the paths," says Renjun Li, one of the researchers on the project.

This is an awesome development in the world of virtual reality (VR) - this 13 min video is a must see for anyone interested in the rapidly emerging world of VR - just imagine this as an aid to learning about the world - let alone all the other unpredictable uses.

I discovered this app by chance and didn't know anything about it, but it was free on Steam and I was blown away by how good it was, so I thought I must share it. Better still, I found out you can also make these destinations yourself with a Valve app add-on.

This is a 5 min MUST SEE video - for the first time I have a sense of just how complex protein folding is - now imagine this with Valve's VR technology - how much deeper would learning be if we were immersed in the protein soup?

Double-stranded DNA, in which genetic information is encoded, is folded into compact protein-DNA complexes called “chromatin” in the cell nucleus. When DNA is transcribed into RNA for gene expression, the chromatin has to be in a relaxed conformation. These conformational changes are regulated by chemical modifications of histones, among other mechanisms. To study this complicated mechanism of life, three-dimensional structures of the nucleosomes that compose chromatin were constructed virtually, and molecular dynamics (MD) simulations based on physical laws such as the equation of motion were run on the K computer. Such MD calculations make it possible to simulate and observe the dynamic behavior of chromatin structures precisely.
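At its core, an MD simulation like this repeatedly integrates Newton's equation of motion for every atom. A minimal sketch of the standard velocity-Verlet scheme — here for a single particle on a harmonic spring, nothing like the scale or force field of the chromatin simulation — shows the basic loop:

```python
# Minimal velocity-Verlet integration of Newton's equation of motion,
# the same basic time-stepping scheme MD codes use (illustrated here
# with one particle on a harmonic spring, not a chromatin force field).
def velocity_verlet(x, v, force, mass, dt, steps):
    f = force(x)
    for _ in range(steps):
        v_half = v + 0.5 * dt * f / mass   # half-step velocity update
        x = x + dt * v_half                # full-step position update
        f = force(x)                       # recompute force at new position
        v = v_half + 0.5 * dt * f / mass   # second half-step velocity update
    return x, v

k = 1.0                    # spring constant
force = lambda x: -k * x   # Hooke's law: F = -kx
x, v = velocity_verlet(x=1.0, v=0.0, force=force, mass=1.0, dt=0.01, steps=1000)
# Total energy 0.5*k*x**2 + 0.5*m*v**2 stays close to its initial value
# of 0.5, as expected for a symplectic integrator.
```

A real chromatin simulation does exactly this, but for millions of interacting atoms with far more elaborate force terms — which is why a machine like the K computer is needed.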

Simulations of biomolecules in cellular environments

Graduate School of Medical Life Science, Yokohama City University

Mitsunori Ikeguchi

For Fun

We now know that newspapers are using AI to write articles - this may be the next thing to hit Hollywood, given that its megahits are all formulaic anyway. This is a 9 min video.

In the wake of Google's AI Go victory, filmmaker Oscar Sharp turned to his technologist collaborator Ross Goodwin to build a machine that could write screenplays. They created "Jetson" and fueled him with hundreds of sci-fi TV and movie scripts. Building a team including Thomas Middleditch, star of HBO's Silicon Valley, they gave themselves 48 hours to shoot and edit whatever Jetson decided to write.