Thursday, March 5, 2015

Friday Thinking, 6 March 2015

Hello all – Friday Thinking is curated in the spirit of sharing. Many thanks to those who enjoy this.

You can’t make pencils that actually write that can’t also be used to write out plans to destroy the world.

...So, when I look at this field, I first see the enormous potential to do great things, solve disease and poverty, improve our lives and make the world a far better place for everyone, and push back the boundaries of science. Then I see the dangers, and in spite of trying hard, I simply can’t see how we can prevent a useful AI from being misused. If it is dumb, it can be tricked. If it is smart, it is inherently potentially dangerous in and of itself. There is no reason to assume it will become malign, but there is also no reason to assume that it won’t.

We then fall back on the child analogy. We could develop the smartest AI imaginable with extreme levels of consciousness and capability. We might educate it in our values, guide it and hope it will grow up benign. If we treat it nicely, it might stay benign. It might even be the greatest thing humanity ever built. However, if we mistreat it, or treat it as a slave, or don’t give it enough freedom, or its own budget and its own property and space to play, and a long list of rights, it might consider we are not worthy of its respect and care, and it could turn against us, possibly even destroying humanity.

...In my analysis, the best means of keeping up with AI is to develop a full direct brain link first, way out at 2040–2045 or even later. If humans have direct mental access to the same or greater level of intelligence as our AIs, then our stick is at least as big, so at least we have a good chance in any fight that happens. If we don’t, then it is like having a much larger son with bigger muscles. You have to hope you have been a good parent. To be safe, best not to build a superhuman AI until after 2050.

One argument for quick action is that the amount of genome data is exploding. The largest labs can now sequence human genomes to a high polish at the pace of two per hour. (The first genome took about 13 years.) Back-of-the-envelope calculations suggest that fast machines for DNA sequencing will be capable of producing 85 petabytes of data this year worldwide, twice that much in 2019, and so on. For comparison, all the master copies of movies held by Netflix take up 2.6 petabytes of storage.
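The scale of those figures is easier to feel with a quick calculation. A minimal sketch, using only the article's own numbers (85 PB in 2015, "twice that much in 2019", Netflix's 2.6 PB of masters):

```python
# Back-of-the-envelope check on the article's sequencing-data projections.
baseline_pb = 85.0   # projected worldwide sequencer output, 2015 (per the article)
netflix_pb = 2.6     # all of Netflix's movie master copies, for scale

projections = {2015: baseline_pb, 2019: 2 * baseline_pb}  # "twice that much in 2019"
for year, pb in sorted(projections.items()):
    print(f"{year}: ~{pb:.0f} PB, about {pb / netflix_pb:.0f}x Netflix's masters")
```

Even the 2015 figure alone is roughly thirty-three Netflix catalogs of raw data per year.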

“This is a technical question,” says Adam Berrey, CEO of Curoverse, a Boston startup that is using the alliance’s standards in developing open-source software for hospitals. “You have what will be exabytes of data around the world that nobody wants to move. So how do you query it all together, at once? The answer is instead of moving the data around, you move the questions around. No industry does that. It’s an insanely hard problem, but it has the potential to be transformative to human life.”
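Berrey's "move the questions around" idea can be sketched in a few lines. This is a toy illustration, not Curoverse's actual software or API: each site evaluates the same question against its local records and returns only a small aggregate, so no raw genome data ever moves.

```python
# Hypothetical federated-query sketch: ship the question to each site,
# receive only small aggregate answers, never the underlying genomes.
from typing import Callable

def federated_count(sites: list[list[dict]],
                    predicate: Callable[[dict], bool]) -> int:
    """Ask every site the same question locally and sum the answers."""
    total = 0
    for local_records in sites:   # in reality, a remote API call per site
        total += sum(1 for rec in local_records if predicate(rec))
    return total

# Toy data standing in for per-hospital variant databases.
hospital_a = [{"gene": "BRCA1", "variant": "c.68_69del"},
              {"gene": "TP53",  "variant": "c.743G>A"}]
hospital_b = [{"gene": "BRCA1", "variant": "c.68_69del"}]

matches = federated_count([hospital_a, hospital_b],
                          lambda rec: rec["gene"] == "BRCA1")
print(matches)  # 2 carriers found without moving any genome data
```

The design choice is the whole point: the query is cheap to ship and the answer is tiny, while the exabytes stay where they are.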

The way the math works out, sharing data no longer looks optional, whether researchers are trying to unravel the causes of common diseases or ultra-rare ones. “There’s going to be an enormous change in how science is done, and it’s only because the signal-to-noise ratio necessitates it,” says Arthur Toga, a researcher who leads a consortium studying the science of Alzheimer’s at the University of Southern California. “You can’t get your result with just 10,000 patients—you are going to need more. Scientists will share now because they have to.”

Various studies of the impact of robots paint a depressing picture of the future. For instance, one paper from researchers at Oxford University predicts that 47% of U.S. jobs are at "high risk" of computerization over the next two decades. All manner of positions could fall by the wayside, including jobs in transport and logistics, construction, mining, food preparation, and the police force. Even roles you might think of as "high value"—like doctors and lawyers—could be undermined, that study found.

Digitization is upending many core tenets of competition among industries by lowering the cost of entering markets and providing high-speed passing lanes to scale up enterprises. At the extreme are hyperscale businesses that are pushing the new rules of digitization so radically that they are challenging conventional management intuition about scale and complexity. These businesses have users, customers, devices, or interactions numbered in the hundreds of millions, billions, or more. Billions of interactions and data points, in turn, mean that events with only a one-in-a-million probability are happening many times a day.
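The one-in-a-million claim follows directly from the arithmetic of scale. A quick sketch, using an illustrative volume (five billion daily interactions is an assumption, not a figure from the article):

```python
# Expected count of one-in-a-million events at hyperscale volumes.
daily_interactions = 5_000_000_000   # illustrative: 5 billion events per day
p_rare = 1 / 1_000_000               # a "one-in-a-million" event

expected_daily = daily_interactions * p_rare
print(expected_daily)  # 5000.0 such "rare" events every single day
```

At that volume, the tail of the distribution is an everyday operational concern, which is exactly why hyperscale breaks conventional management intuition.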

...new hyperscale segments are emerging in manufacturing industries thanks to the Internet of Things, which creates massive data flows from machine-to-machine interactions. For example, the GE twin-jet engines on a Boeing 787 Dreamliner generate a terabyte of information a day.

...Hyperscale competitors can rise up and disrupt traditional businesses at speeds that surprise the unprepared. Digitization catalyzes rapid growth by creating network effects and evaporating marginal costs; the cost of storing, transporting, and replicating data is almost zero.

In 1990, the top three automakers in Detroit had among them nominal revenues of $250 billion, a market capitalization of $36 billion, and 1.2 million employees. The top three companies in Silicon Valley in 2014 had nominal revenues of $247 billion, a market capitalization of over $1 trillion, and only 137,000 employees.
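The contrast is starkest on a per-employee basis, as a quick calculation over the article's own figures shows:

```python
# Revenue and market cap per employee, from the figures quoted above.
detroit = {"revenue_b": 250, "market_cap_b": 36,    "employees": 1_200_000}
valley  = {"revenue_b": 247, "market_cap_b": 1_000, "employees": 137_000}

for name, co in [("Detroit 1990", detroit), ("Silicon Valley 2014", valley)]:
    rev_per_emp = co["revenue_b"] * 1e9 / co["employees"]
    cap_per_emp = co["market_cap_b"] * 1e9 / co["employees"]
    print(f"{name}: ${rev_per_emp:,.0f} revenue, "
          f"${cap_per_emp:,.0f} market cap per employee")
```

Roughly the same revenue, produced by about a ninth of the workforce, valued by the market at nearly thirty times as much.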

Now that March has come, so has the next installment of these brief, crisp, curiosity-fueled productions: “Has Technology Changed Us?”

In a word: yes. But then, everything we do has always changed us, thanks to the property of the brain we now call “plasticity.” This we learn from the video, “Rewiring the Brain” (right below), which, balancing its heartening neuroscientific evidence with the proverbial old dog’s ability to learn new tricks, also tells of the “attention disorders, screen addictions, and poor social skills” that may have already begun plaguing the younger generation.

Marshall McLuhan, of course, could have foreseen all this. Hence his appearance in “The Medium is the Message” (top), a title taken from the University of Toronto English professor turned communication-theory guru’s famous dictum. The video actually spells out McLuhan’s own explanation of that much-quoted line: “What has been communicated has been less important than the particular medium through which people communicate.” Whether you buy that notion or not, the whole range of proclamations McLuhan had on the subject will certainly get you thinking — in his own words, “You don’t like these ideas? I got others.”

Here’s a 15-minute video of McLuhan himself talking about the medium.

McLuhan Said “The Medium Is The Message”; Two Pieces Of Media Decode the Famous Phrase

For my money, “I don’t necessarily agree with everything I say” tops the list of Marshall McLuhan-isms, followed closely and at times surpassed by “You don’t like those ideas? I got others.” Many prefer the immortal “You know nothing of my work!”, the line McLuhan delivers during his brief appearance in Woody Allen’s Annie Hall. In 1977, the same year Allen’s protagonist would summon him to defeat that pontificating academic, McLuhan flew to Sydney to deliver a lecture. Then, for the Australian Broadcasting Corporation’s Radio National, he recorded a program answering questions from students, nuns, and others about his views on media.

Here’s a nice interview with Jeremy Rifkin. A Must Read.

Jeremy Rifkin: In New Economy, 'Social Skills Count More Than Work Skills'

Mr. Rifkin, the visions you formulate in your recent book “The Zero Marginal Cost Society” -- a third industrial revolution and the end of capitalism as we know it -- sound very utopian and far-fetched. Are you more interested in what happens the day after tomorrow rather than tomorrow?

Rifkin: I am not. The “Communications Internet” is already a reality and the “Energy Internet” will be so in a few years’ time. Then we will have a new technology platform that connects everybody and everything. These things are not far-fetched, they are already happening. Of course it will take another 20 years until that platform has reached a mature level, but bear in mind that the first and second industrial revolutions also took time. And to come back to your point: I am not a utopianist!

Here are some interesting developments around the power of cities to provide their residents the commons as infrastructure for the 21st century.

These Cities Built Cheap, Fast, Community-Owned Broadband. Here's What Net Neutrality Means For Them—and All of Us

Publicly owned broadband lets local communities from Iowa to Louisiana control a vital economic resource—rather than leaving it in the hands of a few monopolistic corporations. The outcome of this week's FCC vote could either help or hinder the path forward.

Just before his State of the Union address last month, President Obama showed up in the small city of Cedar Falls, Iowa, to highlight the work of Cedar Falls Utilities, a publicly owned utility that operates an Internet network in the city. Cedar Falls has one of the oldest community-owned networks in the country and, with recent upgrades, is now one of the fastest. In addition to having higher-speed connections than neighboring communities in Iowa, the publicly owned network’s more than 11,000 subscribers pay around $200 less per year.

While in Cedar Falls, the President stated his opposition to the spread of corporate-backed state laws banning local communities from operating their own networks. An accompanying White House report highlighted several community broadband success stories, including efforts in Chattanooga, Tenn., Wilson, N.C., and Lafayette, La.—all of which further document the possibilities of a forward-looking community broadband strategy.

Speaking of new constructs for thinking about our cities, here’s an interesting institutional innovation.

We need tools to empower these citizens to use their work to fashion a polis for the 21st century. One particularly promising innovation is Participatory Budgeting (often shortened to “PB”), a process whereby citizens make spending decisions on a defined public budget and operate as active participants in public decision-making, such as allocating local funds in their neighborhood. The Brazilian Workers’ Party first attempted PB in 1989, and its success led the World Bank to call PB a “best practice” in democratic innovation.

Since then, PB has expanded to over 1,500 cities worldwide, including in the United States. Starting in 2009 in Chicago’s 49th ward with a budget of just one million dollars, PB in the United States has become a $25-million-a-year experiment in New York City alone, supported by the intrepid work of the Participatory Budgeting Project.

How does PB work? That’s the beauty of it — it’s adaptable to a community’s needs, so there are no cookie-cutter approaches. PB works for cities of all sizes across the country. Municipal leaders from Vallejo, CA to Cambridge, MA have turned over a portion of their discretionary funds for neighborhood residents to decide. Nearly half of New York City’s Council members are participating this year after newly elected Mayor Bill de Blasio made it a cornerstone of his campaign. City Council Speaker Melissa Mark-Viverito, one of the first four Council members to champion the process, has pushed for its expansion and maintains a centralized PB website. Just the other week, at her State of the City address, Speaker Mark-Viverito called for using PB with New York City Housing Authority monies. Chicago Mayor Rahm Emanuel created a new Manager of Participatory Budgeting to help coordinate wards that want to participate. Last year the White House included federally supported participatory budgeting as part of its international Open Government Partnership commitments.

Billions of people could get online for the first time thanks to helium balloons that Google will soon send over many places cell towers don’t reach.

Availability: 1-2 years

You climb 170 steps up a series of dusty wooden ladders to reach the top of Hangar Two at Moffett Federal Airfield near Mountain View, California. The vast, dimly lit shed was built in 1942 to house airships during a war that saw the U.S. grow into a technological superpower. A perch high in the rafters is the best way to appreciate the strangeness of something in the works at Google—a part of the latest incarnation of American technical dominance.

On the floor far below are Google employees who look tiny as they tend to a pair of balloons, 15 meters across, that resemble giant white pumpkins. Google has launched hundreds of these balloons into the sky, lofted by helium. At this moment, a couple of dozen float over the Southern Hemisphere at an altitude of around 20 kilometers, in the rarely visited stratosphere—nearly twice the height of commercial airplanes. Each balloon supports a boxy gondola stuffed with solar-powered electronics. They make a radio link to a telecommunications network on the ground and beam down high-speed cellular Internet coverage to smartphones and other devices. It’s known as Project Loon, a name chosen for its association with both flight and insanity.

Google says these balloons can deliver widespread economic and social benefits by bringing Internet access to the 60 percent of the world’s people who don’t have it. Many of those 4.3 billion people live in rural places where telecommunications companies haven’t found it worthwhile to build cell towers or other infrastructure. After working for three years and flying balloons for more than three million kilometers, Google says Loon balloons are almost ready to step in.

GOOGLE SAYS ITS new wireless service will operate on a much smaller scale than the Verizons and the AT&Ts of the world, providing a new way for relatively few people to make calls, trade texts, and access the good old internet via their smartphones. But the implications are still enormous.

Google revealed on Monday it will soon start “experimenting” with wireless services and the ways we use them—and that’s no small thing. Such Google experiments have a way of morphing into something far bigger, particularly when they involve tinkering with the infrastructure that drives the internet.

As time goes on, the company may expand the scope of its ambitions as a wireless carrier, much as it had done with its super-high-speed landline internet service, Google Fiber. But the larger point is that Google’s experiments—if you can call them that—will help push the rest of the market in the same direction. The market is already moving this way thanks to other notable tech names, including mobile carrier T-Mobile, mobile chipmaker Qualcomm, and serial Silicon Valley inventor Steve Perlman, who recently unveiled a faster breed of wireless network known as pCell.

At the moment, Google says, it hopes to provide ways for phones to more easily move between cellular networks and WiFi connections, perhaps even juggling calls between the two. Others, such as T-Mobile and Qualcomm, are working on much the same. But with the leverage of its Android mobile operating system and general internet clout, Google can push things even further. Eventually, the company may even drive the market towards new kinds of wireless networks altogether, networks that provide connections when you don’t have cellular or WiFi—or that significantly boost the speed of your cellular connection, as Perlman hopes to do.

So with ubiquitous WiFi, will it just be people looking at their mobile or home screens? Or just better access to ‘Netflix’? We have been filling the new media of the digital environment with the content of the old media of the printing press, but what this demonstrates is the ‘real’ content of the digital environment: the immersive, interactive experience of mixed-reality environments.

This 4-minute video is already 3 years old, and the technology behind it is moving very fast. Just think of a MOOC augmented with virtual engagement, or science done across multiple labs and locations. This is a MUST SEE, and it is very important to consider where this technology will be in 5 years.

Demonstration of the KeckCAVES Remote Collaboration approach and early implementation. Oliver is in the UC Davis VR lab in front of a 3D TV with an optical tracking system. Dawn is in the fully immersive KeckCAVES in a different building. Burak is in the VR lab using a desktop computer and a mouse. Two Kinects are capturing Oliver's image, and two are capturing Dawn's image. Burak's image is not being captured. Sound is shared among all three participants, although Burak doesn't say anything.

All viewers see the same data. Both Oliver's and Dawn's images are rendered in Burak's view. Only Oliver's image is rendered in Dawn's view, so she can see him and how he is moving, plus her own physical body. Only Dawn's image is rendered for Oliver. Burak's view is represented by a spherical avatar with orientation ornaments, and his mouse is represented by a cone. Oliver and Dawn have similar avatars that are sometimes visible in addition to their images.

This video is assembled from screen shots off Burak's monitor and shows precisely what he sees. The perspective of the video changes as Burak rotates his view with the mouse. Oliver and Dawn move relative to the sample in the video when they rotate or scale their individual views relative to the data.

This is definitely the important question to ask in a time when everything that can be automated will be. What’s the next economy? The article is short and the video is 7 minutes.

What if you could enjoy the trappings of modern living while working only two hours a day? What if you could build that life using only easily accessible, off-the-shelf parts? What if the plans for creating this sustainable civilization were available to everyone, constantly being improved by contributors around the world? These are the goals of Open Source Ecology, TED Fellow Marcin Jakubowski's movement of farmers, engineers, and developers who are creating an open source blueprint for building a sustainable civilization with a starting cost of $10,000.

"I finished my 20s with a PhD in fusion energy, and I discovered I was useless,” Jakubowski says. So he set off to start a sustainable farm. He failed miserably. What Jakubowski discovered is that the tools required were unavailable or simply too expensive to maintain. The beauty of open-source hardware is that now anyone can build their own tractor or harvester from scratch. For the past two years, Jakubowski has been leading an effort to build a Global Village Construction Set, "an open source, low-cost, high performance technological platform that allows for the easy, DIY fabrication of the 50 different Industrial Machines that it takes to build a sustainable civilization with modern comforts." You could plant 50 trees in an afternoon, press 5,000 bricks or build a tractor in a week.

Here’s the Internet of DNA - another milestone in the domestication of DNA.

Noah is a six-year-old suffering from a disorder without a name. This year, his physicians will begin sending his genetic information across the Internet to see if there’s anyone, anywhere, in the world like him.

A match could make a difference. Noah is developmentally delayed, uses a walker, speaks only a few words. And he’s getting sicker. MRIs show that his cerebellum is shrinking. His DNA was analyzed by medical geneticists at the Children’s Hospital of Eastern Ontario. Somewhere in the millions of As, Gs, Cs, and Ts is a misspelling, and maybe the clue to a treatment. But unless they find a second child with the same symptoms, and a similar DNA error, his doctors can’t zero in on which mistake in Noah’s genes is the crucial one.

In January, programmers in Toronto began testing a system for trading genetic information with other hospitals. These facilities, in locations including Miami, Baltimore, and Cambridge, U.K., also treat children with so-called ­Mendelian disorders, which are caused by a rare mutation in a single gene. The system, called MatchMaker Exchange, represents something new: a way to automate the comparison of DNA from sick people around the world.
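The core matchmaking step can be sketched as a similarity check over candidate genes and symptoms. This is a toy illustration, not the actual MatchMaker Exchange protocol, and the gene names below are placeholders, not Noah's real candidates:

```python
# Toy matchmaker: flag patient pairs that share a candidate gene and
# enough overlapping symptoms (NOT the real MatchMaker Exchange protocol).
def is_match(case_a: dict, case_b: dict, min_shared_symptoms: int = 2) -> bool:
    shared_genes = set(case_a["candidate_genes"]) & set(case_b["candidate_genes"])
    shared_symptoms = set(case_a["symptoms"]) & set(case_b["symptoms"])
    return bool(shared_genes) and len(shared_symptoms) >= min_shared_symptoms

noah = {"candidate_genes": ["GENE_X", "GENE_Y"],   # hypothetical placeholders
        "symptoms": ["developmental delay", "cerebellar atrophy", "ataxia"]}
remote_case = {"candidate_genes": ["GENE_X"],
               "symptoms": ["developmental delay", "cerebellar atrophy"]}

print(is_match(noah, remote_case))  # True: one shared gene, two shared symptoms
```

The value of automating this is in the cross-product: comparing every undiagnosed case at every participating hospital against every other, which no single clinician could do by hand.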

One of the people behind this project is David Haussler, a bioinformatics expert based at the University of California, Santa Cruz. The problem Haussler is grappling with now is that genome sequencing is largely detached from our greatest tool for sharing information: the Internet. That’s unfortunate because more than 200,000 people have already had their genomes sequenced, a number certain to rise into the millions in years ahead. The next era of medicine depends on large-scale comparisons of these genomes, a task for which he thinks scientists are poorly prepared. “I can use my credit card anywhere in the world, but biomedical data just isn’t on the Internet,” he says. “It’s all incomplete and locked down.” Genomes often get moved around in hard drives and delivered by FedEx trucks.

Haussler is a founder and one of the technical leaders of the Global Alliance for Genomics and Health, a nonprofit organization formed in 2013 that compares itself to the W3C, the standards organization devoted to making sure the Web functions correctly. Also known by its unwieldy acronym, GA4GH, it’s gained a large membership, including major technology companies like Google. Its products so far include protocols, application programming interfaces (APIs), and improved file formats for moving DNA around the Web. But the real problems it is solving are mostly not technical. Instead, they are sociological: scientists are reluctant to share genetic data, and because of privacy rules, it’s considered legally risky to put people’s genomes on the Internet.

Speaking of the Internet of DNA - here’s a longish but good article listing some of the emerging applications in the rise of wearables - a new sensor domain that can help to shape preventative approaches. There are a few short videos here as well.

TOMORROW'S WEARABLES MIGHT NOT TURN ON YOUR MICROWAVE OR HELP YOU GET OUT OF A BAD DATE. BUT THEY COULD SAVE LIVES.

Remember life before high-speed Internet, or when having a smartphone was considered a luxury? Every few years, a new technology comes along, gains enough traction to become its own category, and has the potential to change how we live.

Enter wearables. Some expect wearable devices to repeat the growth pattern of smartphones. The category has increased its global market value by over 1,000% since 2012. More importantly, the amount is predicted to double over the next three years, reaching U.S. $12.6 billion and establishing wearables as the de facto product category for the connected world. But the prevailing wisdom among many purveyors of wearables that their products simply need to be cool—A ring that turns on your microwave! A necklace that triggers fake phone calls!—is plain wrong. The future of wearables is decidedly pragmatic. Wearables will care for the elderly, aid the disenfranchised, and maybe even help save lives.

And this paragraph suggests lots of potential uses in the domain of Social Physics.

Research conducted over 19 years on 300,000 test subjects led Beyond Verbal to discover an algorithm that uses voice recognition to identify the full spectrum of human emotions and personality. Their wellness API is currently being integrated into wearable devices so they can detect emotional dominance, positivity levels and mood fluctuation, providing insights to advance remote psychological treatment. It is this approach that can make wearables invaluable as we strive to alleviate serious conditions and problems.

Speaking of the Internet and ‘libraries’ of data - this is another interesting article.

David Weinberger is senior researcher at Harvard’s Berkman Center for Internet & Society, and has been instrumental in the development of ideas about the impact of the web. Shortly before his recent keynote presentation at OCLC’s EMEA Regional Council Meeting in Florence, he spoke with Sarah Bartlett about the library-sized hole in the Internet and how a ‘library graph’ might help librarians to fill it.

Library knowledge – the content; the metadata; what librarians and the community know about items held – is being lost to the web. This represents an immense amount of culture. The most basic components of the web are links, but if you want to talk about a book, what do you link to? There is no clear answer. People might turn to Wikipedia, but only around 70,000 books actually have a page on Wikipedia, so people rely on commercial sites like Amazon. We aren’t even meeting the most basic requirement, linking, much less having a way to refer to the history of the work, how it’s affected people and culture.

Facebook holds huge volumes of information about its users and their lives, but we have no equivalent for what libraries know. That is a huge hole in the internet, and it has at least two negative consequences. Firstly, as library information becomes harder to find, it becomes less relevant. Secondly, libraries themselves become marginalised. The culture that libraries represent becomes invisible on the internet, and the perceived value of libraries diminishes. This is a very real problem. Libraries can address it, but it will take a lot of effort.

In the face of considerable challenges, libraries have done very good things, but it’s going to take more. Libraries are providing open access to a closed world, and that is a tremendous service, but they are severely constrained by copyright laws that were not designed for a networked age. They also have limited budgets.

Speaking of libraries - this is a fascinating article on two fronts - who’s heard of ‘computational anthropology’ before? And how Wikipedia is becoming a source of knowledge about knowledge.

Computational Anthropology Reveals How the Most Important People in History Vary by Culture

Data mining Wikipedia people reveals some surprising differences in the way eastern and western cultures identify important figures in history, say computational anthropologists.

The study of differences between cultures has been revolutionized by the internet and the behavior of individuals online. Indeed, this phenomenon is behind the birth of the new science of computational anthropology.

One particularly fruitful window into the souls of different cultures is Wikipedia, the crowd-sourced online encyclopedia with over 31 million articles in 285 different languages. One important category consists of articles about significant people. And not just anyone can appear. Wikipedia has specific criteria that notable people must meet to merit inclusion.

So an interesting question is how the most important people vary from one language version of Wikipedia to another. Clearly, these differences must arise from the cultural forces that determine notability (or notoriety) in different parts of the world.

Today, Peter Gloor at the Massachusetts Institute of Technology in Cambridge and a few pals say they have calculated the most significant people in four different language versions of Wikipedia—English, German, Chinese and Japanese. And they say important differences emerge, not just in the names that appear, but in the broader make-up of the lists.

Here’s a fascinating article with links to some anthropology ‘color commentary’ on making sense of the world. It even makes one think about the possibility of new neural structures.

It's about the way that humans see the world, and how until we have a way to describe something, even something so fundamental as a color, we may not even notice that it's there.

Until relatively recently in human history, "blue" didn't exist, not in the way we think of it.

As the delightful Radiolab episode "Colors" describes, ancient languages didn't have a word for blue — not Greek, not Chinese, not Japanese, not Hebrew. And without a word for the color, there's evidence that they may not have seen it at all.

With a slight nod to Jane Austen who wonderfully articulated the emotional impact of cultural presumptions, journalist and anthropologist Maureen Matthews explores the anthropology of sensory perception. The cultural structuring of our senses shapes how our brains decide what we see and what we think.

We say there are five senses. But maybe there are actually dozens of them. The number appears to be set not by our bodies, but by the culture we live in.

Human brains operate in a pretty standard way. External stimuli flood in -- light and noise and taste -- and our brains sort through the torrent and try to make sense of it all. We do this by using experience and culture as filters and frameworks. But it takes time to apply them.

In tiny fractions of a second, our culture frames understanding, telling us what we see and feel and taste, and ....sense.

Now if the anthropology of the senses seems weird, the transdisciplinary fields of biology and physics seem to be making lots of biological phenomena even stranger.

It’s difficult to prove that a device said to be a quantum computer actually is one. While entanglement is a requirement for the quantum performance of machines like D-Wave, it is not a proof. But the remarkable abilities of birds to navigate using Earth’s minute magnetic field are now similarly believed to depend on a biological quantum compass — although proving it is another story.

At a meeting of the American Physical Society this Wednesday in Texas, Peter Hore will be describing new experimental results that help explain how avian magnetoreception might actually work. Like many other organisms, birds have many special adaptations to help them navigate. In addition to the ability to detect things like polarized light, they have any number of ways they might use to sense magnetic fields. The idea that they use magnetic particles within the neurites coursing through their beaks, while conceivable, is no longer the best explanation for their abilities.

Closer inspection now suggests that those particles are just incidental iron concretions packaged in macrophages with no direct link to their nervous systems. A better way to try to do it, a way birds appear to have found, may be to use a chemical compass instead. The main idea is that light-activated chemical reactions occurring within the bird’s eyes are sensitive not just to the strength of a magnetic field, but to its direction.

Here is a great article discussing the relationship between our personality and mental health and our microbial profile. Well worth the read.

The microbiome may yield a new class of psychobiotics for the treatment of anxiety, depression and other mood disorders

The notion that the state of our gut governs our state of mind dates back more than 100 years. Many 19th- and early 20th-century scientists believed that accumulating wastes in the colon triggered a state of “auto-intoxication,” whereby poisons emanating from the gut produced infections that were in turn linked with depression, anxiety and psychosis. Patients were treated with colonic purges and even bowel surgeries until these practices were dismissed as quackery.

The ongoing exploration of the human microbiome promises to bring the link between the gut and the brain into clearer focus. Scientists are increasingly convinced that the vast assemblage of microfauna in our intestines may have a major impact on our state of mind. The gut-brain axis seems to be bidirectional—the brain acts on gastrointestinal and immune functions that help to shape the gut's microbial makeup, and gut microbes make neuroactive compounds, including neurotransmitters and metabolites that also act on the brain. These interactions could occur in various ways: microbial compounds communicate via the vagus nerve, which connects the brain and the digestive tract, and microbially derived metabolites interact with the immune system, which maintains its own communication with the brain. Sven Pettersson, a microbiologist at the Karolinska Institute in Stockholm, has recently shown that gut microbes help to control leakage through both the intestinal lining and the blood-brain barrier, which ordinarily protects the brain from potentially harmful agents.

Microbes may have their own evolutionary reasons for communicating with the brain. They need us to be social, says John Cryan, a neuroscientist at University College Cork in Ireland, so that they can spread through the human population. Cryan's research shows that germ-free mice bred in sterile conditions, which lack intestinal microbes, also lack the ability to recognize other mice they interact with. In other studies, disruptions of the microbiome induced behavior in mice that mimics human anxiety, depression and even autism. In some cases, scientists restored more normal behavior by treating their test subjects with certain strains of benign bacteria. Nearly all the data so far are limited to mice, but Cryan believes the findings provide fertile ground for developing analogous compounds, which he calls psychobiotics, for humans. “That dietary treatments could be used as either adjunct or sole therapy for mood disorders is not beyond the realm of possibility,” he says.

Cryan recently published a study in which two varieties of Bifidobacterium produced by his lab were more effective than escitalopram (Lexapro) at treating anxious and depressive behavior in a lab mouse strain known for pathological anxiety.

In a proof-of-concept study Mayer and his colleagues at U.C.L.A. uncovered the first evidence that probiotics ingested in food can alter human brain function. The researchers gave healthy women yogurt twice a day for a month. Then brain scans using functional magnetic resonance imaging were taken as the women were shown pictures of actors with frightened or angry facial expressions. Normally, such images trigger increased activity in emotion-processing areas of the brain that leap into action when someone is in a state of heightened alert. Anxious people may be uniquely sensitive to these visceral reactions. But the women on the yogurt diet exhibited a less “reflexive” response, “which shows that bacteria in our intestines really do affect how we interpret the world,” says gastroenterologist Kirsten Tillisch, the study's principal investigator. Mayer cautions that the results are rudimentary. “We simply don't know yet if probiotics will help with human anxiety,” he says. “But our research is moving in that direction.”

Rob Knight is a pioneer in studying human microbes, the community of tiny single-celled organisms living inside our bodies that has a huge — and largely unexplored — role in our health. “The three pounds of microbes that you carry around with you might be more important than every single gene you carry around in your genome,” he says. Find out why.

This is an important development in the world of energy - are there huge geopolitical changes looming on our horizons? Certainly investors in energy have to begin to seriously evaluate future returns on investment - in ways very different from those of 5 or 10 years ago.

One of the biggest banks in the Middle East and the oil-rich Gulf countries says that fossil fuels can no longer compete with solar technologies on price, and predicts that the vast bulk of the $US48 trillion needed to meet global power demand over the next two decades will come from renewables.

The report from the National Bank of Abu Dhabi says that while oil and gas have underpinned almost all energy investments until now, future investment will be almost entirely in renewable energy sources.

The report is important because the Gulf region, the Middle East and north Africa will need to add another 170GW of electricity in the next decade, and the major financiers recognise that the cheapest and most effective way to go is through solar and wind. It also highlights how even the biggest financial institutions in the Gulf are thinking about how to deploy their capital in the future.

“Cost is no longer a reason not to proceed with renewables,” the 80-page NBAD report says. It says the most recent solar tender showed that even at $10/barrel for oil, and $5/mmbtu for gas, solar is still a cheaper option.
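To make that kind of comparison concrete, here is a back-of-the-envelope converter from fuel price to the fuel component of generation cost per MWh of electricity. All parameter values below - the heat rates, the energy content of a barrel, and the solar tender price - are illustrative assumptions for the sketch, not figures taken from the NBAD report itself:

```python
# Convert a fuel price into the fuel-only cost of a MWh of electricity.
# A plant's "heat rate" is the fuel energy it burns per MWh generated
# (lower = more efficient). Figures here are rough illustrations.

MMBTU_PER_BARREL = 5.8  # approximate energy content of a barrel of crude

def fuel_cost_per_mwh(fuel_price_per_mmbtu, heat_rate_mmbtu_per_mwh):
    """Fuel component of generation cost, in $/MWh of electricity."""
    return fuel_price_per_mmbtu * heat_rate_mmbtu_per_mwh

gas = fuel_cost_per_mwh(5.0, 7.0)                        # modern gas plant
oil = fuel_cost_per_mwh(10.0 / MMBTU_PER_BARREL, 10.0)   # oil-fired steam plant
solar_bid = 58.4  # $/MWh, i.e. 5.84 c/kWh, an assumed regional solar tender

print(f"gas fuel cost: ${gas:.1f}/MWh")     # fuel alone, capex/O&M on top
print(f"oil fuel cost: ${oil:.1f}/MWh")     # fuel alone, capex/O&M on top
print(f"solar tender:  ${solar_bid:.1f}/MWh (all-in price)")
```

Note the asymmetry that makes the report's claim plausible: the fossil figures cover fuel alone, with capital and operating costs still to be added, while a solar tender price is an all-in number with effectively zero fuel cost.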

Here’s a 3min video on Volvo’s vision of self-driving cars coming very soon. Worth the watch.

OK - I know that deep down in everyone’s psyche lurks the fear of what they would do, of how they would handle a Zombie Apocalypse. This doesn’t tell us how people would handle a swarm of rational actors seeking to maximize their selfish interests, but this group of statisticians figures out where best to go.

A team of Cornell University researchers focusing on a fictional zombie outbreak as an approach to disease modeling suggests heading for the hills, in the Rockies, to save your 'braains' from the 'undead.'

Reading World War Z, an oral history of the first zombie war, and taking a graduate statistical mechanics class together inspired a group of Cornell University researchers to explore how an "actual" zombie outbreak might play out in the U.S.

During the 2015 American Physical Society March Meeting, on Thursday, March 5 in San Antonio, Texas, the group will describe their work modeling the statistical mechanics of zombies—those thankfully fictional "undead" creatures with an appetite for human flesh.

Why model the mechanics of zombies? "Modeling zombies takes you through a lot of the techniques used to model real diseases, albeit in a fun context," says Alex Alemi, a graduate student at Cornell University.
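Those techniques start from compartmental models like SIR (susceptible-infected-recovered). The Cornell group's actual work is a stochastic, spatial simulation; the minimal deterministic "SZR" sketch below, with made-up parameter values, is only meant to show the flavor of the approach:

```python
# Minimal deterministic SZR (susceptible-zombie-removed) model, in the
# spirit of SIR epidemic models. Parameters and initial conditions are
# illustrative, not taken from the Cornell simulation.

def szr_step(s, z, r, beta, kappa, dt):
    """Advance one Euler step.
    beta:  rate at which zombies bite susceptibles (S -> Z)
    kappa: rate at which susceptibles destroy zombies (Z -> R)
    """
    bites = beta * s * z * dt
    kills = kappa * s * z * dt
    return s - bites, z + bites - kills, r + kills

def simulate(s0=0.999, z0=0.001, r0=0.0, beta=1.0, kappa=0.8,
             dt=0.01, steps=8000):
    s, z, r = s0, z0, r0
    for _ in range(steps):
        s, z, r = szr_step(s, z, r, beta, kappa, dt)
    return s, z, r

s, z, r = simulate()
print(f"final S={s:.3f}, Z={z:.3f}, R={r:.3f}")
```

The key difference from a real disease is that zombies don't recover: whenever the bite rate exceeds the kill rate (beta > kappa here), the susceptible population is eventually wiped out, which is why the population fractions end up dominated by Z and R.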

Now, in the same line of thought: for anyone interested in a sound, fun and comprehensive analysis of the different theories of International Politics - and in augmenting your repertoire of approaches to handling a Zombie Apocalypse - I really do highly recommend this book.

What would happen to international politics if the dead rose from the grave and started to eat the living? Daniel Drezner’s groundbreaking book answers the question that other international relations scholars have been too scared to ask. Addressing timely issues with analytical bite, Drezner looks at how well-known theories from international relations might be applied to a war with zombies. Exploring the plots of popular zombie films, songs, and books, Theories of International Politics and Zombies predicts realistic scenarios for the political stage in the face of a zombie threat and considers how valid—or how rotten—such scenarios might be.

This newly revived edition includes substantial updates throughout as well as a new epilogue assessing the role of the zombie analogy in the public sphere.

Speaking about political policy responses - here’s something from David Brin about reciprocal accountability.

Again and again we see what works... and what almost never works. So why is the utterly futile prescription almost always the one promoted by security and privacy "experts," by pundits of all stripes and by supposed defenders of freedom and privacy?

Two Philadelphia cops accused of savagely beating a man without provocation and then lying about it have been indicted following a thorough investigation — by the victim's girlfriend. “After Najee Rivera was given a beating that left him with a fractured bone in his face and one eye swollen shut, girlfriend Dina Scannapieco canvassed businesses in the area and found security footage that led to Rivera's exoneration on charges of assault and resisting arrest and to the arrest of the two officers involved.”

The lesson? Again and again, twits declare that the only way to save freedom and privacy is to pass laws restricting information flows, and then to trust elites to enforce or obey those laws.

For Fun

This really is a must-view, from 1997 - almost 20 years ago. The whole thing is 27 min, but it is well worth the time just to realize how much things have changed. Do you remember ‘installing an Internet disk’? Asking if your computer had a modem? Wondering what ‘download’ meant? Or not knowing what a search engine was, or which one to use?