Singularity Hub
https://singularityhub.com
News and Insights on Technology, Science, and the Future from Singularity University

The SpiNNaker Supercomputer, Modeled After the Human Brain, Is Up and Running
https://singularityhub.com/2018/11/19/the-million-core-spinnaker-supercomputer-is-up-and-running/
Mon, 19 Nov 2018 15:00:37 +0000

We’ve long used the brain as inspiration for computers, but the SpiNNaker supercomputer, switched on this month, is probably the closest we’ve come to recreating it in silicon. Now scientists hope to use the supercomputer to model the very thing that inspired its design.

The brain is the most complex machine in the known universe, but that complexity comes primarily from its architecture rather than the individual components that make it up. Its highly interconnected structure means that relatively simple messages exchanged between billions of individual neurons add up to carry out highly complex computations.

That’s the paradigm that has inspired the “Spiking Neural Network Architecture” (SpiNNaker) supercomputer at the University of Manchester in the UK. The project is the brainchild of Steve Furber, the designer of the original ARM processor. After a decade of development, a million-core version of the machine that will eventually be able to simulate up to a billion neurons was switched on earlier this month.

The idea of splitting computation into very small chunks and spreading them over many processors is already the leading approach to supercomputing. But even the most parallel systems require a lot of communication, and messages may have to pack in a lot of information, such as the task that needs to be completed or the data that needs to be processed.

In contrast, messages in the brain consist of simple electrochemical impulses, or spikes, passed between neurons, with information encoded primarily in the timing or rate of those spikes (which is more important is a topic of debate among neuroscientists). Each neuron is connected to thousands of others via synapses, and complex computation relies on how spikes cascade through these highly-connected networks.

The SpiNNaker machine attempts to replicate this using a model called Address Event Representation. Each of the million cores can simulate roughly a million synapses, so depending on the neuron model, a core might handle 1,000 neurons with 1,000 connections each, or 100 neurons with 10,000 connections each. Information is encoded in the timing of spikes and the identity of the neuron sending them. When a neuron is activated, it broadcasts a tiny packet of data containing its address, and spike timing is conveyed implicitly by when the packet is sent.
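To make the address-event idea concrete, here is a minimal Python sketch (illustrative only, not SpiNNaker's actual software) in which a firing neuron broadcasts nothing but its own address and each receiving neuron applies its locally stored synaptic weight:

```python
import random

NUM_NEURONS = 1000
THRESHOLD = 1.0

# Each simulated neuron stores the weights of its incoming synapses, keyed
# by the address of the sending neuron. Names and numbers are illustrative;
# the real SpiNNaker toolchain (e.g. sPyNNaker) differs in detail.
weights = {
    target: {random.randrange(NUM_NEURONS): random.uniform(0.0, 0.1) for _ in range(100)}
    for target in range(NUM_NEURONS)
}
potential = [0.0] * NUM_NEURONS

def deliver_spike(source_address):
    """A spike packet carries only the sender's address; its timing is implicit
    in when the packet arrives. Each receiver looks up its own stored weight."""
    fired = []
    for target in range(NUM_NEURONS):
        w = weights[target].get(source_address)
        if w is None:
            continue
        potential[target] += w
        if potential[target] >= THRESHOLD:
            potential[target] = 0.0
            fired.append(target)  # these neurons now broadcast their own addresses
    return fired

print(len(deliver_spike(source_address=42)))
```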

By modeling their machine on the architecture of the brain, the researchers hope to be able to simulate more biological neurons in real time than any other machine on the planet. The project is funded by the European Human Brain Project, a ten-year science mega-project aimed at bringing together neuroscientists and computer scientists to understand the brain, and researchers will be able to apply for time on the machine to run their simulations.

Importantly, it’s possible to implement various different neuronal models on the machine. The operation of neurons involves a variety of complex biological processes, and it’s still unclear whether this complexity is an artefact of evolution or central to the brain’s ability to process information. The ability to simulate up to a billion simple neurons or millions of more complex ones on the same machine should help to slowly tease out the answer.
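As a rough illustration of what a "simple neuron" means in this context, here is a generic leaky integrate-and-fire update in Python. It is a textbook model, not the specific equations used on SpiNNaker; more biologically detailed models add further state variables and dynamics.

```python
def lif_step(v, input_current, dt=1.0, tau=20.0, v_rest=-65.0,
             v_threshold=-50.0, v_reset=-65.0, resistance=1.0):
    """One time step of a leaky integrate-and-fire neuron.
    The membrane potential v decays toward v_rest and is pushed up by input;
    crossing the threshold emits a spike and resets the potential."""
    dv = (-(v - v_rest) + resistance * input_current) * (dt / tau)
    v = v + dv
    spiked = v >= v_threshold
    if spiked:
        v = v_reset
    return v, spiked

# Example: a constant input drives the neuron to spike periodically.
v, spikes = -65.0, 0
for t in range(200):
    v, spiked = lif_step(v, input_current=20.0)
    spikes += int(spiked)
print(spikes)
```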

Even at a billion neurons, that still only represents about one percent of the human brain, so it’s still going to be limited to investigating isolated networks of neurons. But the previous 500,000-core machine has already been used to do useful simulations of the basal ganglia—an area affected in Parkinson’s disease—and an outer layer of the brain that processes sensory information.

The full-scale supercomputer will make it possible to study even larger networks previously out of reach, which could lead to breakthroughs in our understanding of both the healthy and unhealthy functioning of the brain.

And while neurological simulation is the main goal for the machine, it could also provide a useful research tool for roboticists. Previous research has already shown a small board of SpiNNaker chips can be used to control a simple wheeled robot, but Furber thinks the SpiNNaker supercomputer could also be used to run large-scale networks that can process sensory input and generate motor output in real time and at low power.

That low power operation is of particular promise for robotics. The brain is dramatically more power-efficient than conventional supercomputers, and by borrowing from its principles SpiNNaker has managed to capture some of that efficiency. That could be important for running mobile robotic platforms that need to carry their own juice around.

This ability to run complex neural networks at low power has been one of the main commercial drivers for so-called neuromorphic computing devices that are physically modeled on the brain, such as IBM’s TrueNorth chip and Intel’s Loihi. The hope is that complex artificial intelligence applications normally run in massive data centers could be run on edge devices like smartphones, cars, and robots.

But these devices, including SpiNNaker, operate very differently from the leading AI approaches, and it’s not clear how easy it would be to transfer between the two. The need to adopt an entirely new programming paradigm is likely to limit widespread adoption, and the lack of commercial traction for the aforementioned devices seems to back that up.

At the same time, though, this new paradigm could potentially lead to dramatic breakthroughs in massively parallel computing. SpiNNaker overturns many of the foundational principles of how supercomputers work, which makes it much more flexible and error-tolerant.

For now, the machine is likely to be firmly focused on accelerating our understanding of how the brain works. But its designers also hope those findings could in turn point the way to more efficient and powerful approaches to computing.

Follow the Data? Investigative Journalism in the Age of Algorithms
https://singularityhub.com/2018/11/18/follow-the-data-investigative-journalism-in-the-age-of-algorithms/
Sun, 18 Nov 2018 15:00:34 +0000

You probably have a picture of a typical investigative journalist in your head. Dogged, persistent, he digs through paper trails by day and talks to secret sources in abandoned parking lots by night. After years of painstaking investigation, the journalist uncovers convincing evidence and releases the bombshell report. Cover-ups are exposed, scandals are surfaced, and sometimes the guilty parties are brought to justice.

This is a formula we all know and love. But what happens when, instead of investigating a corrupt politician or a fraudulent business practice, journalists are looking into the behavior of an algorithm?

In an ideal world, algorithmic decision-making would be better than decisions made by humans. If you don’t program your code to discriminate on the basis of age, gender, race, or sexuality, then you might think those factors simply won’t be taken into account. In theory, the algorithms should make decisions based purely on the data, in a transparent way.

Reality, however, is not ideal; algorithms are designed by people and draw their datasets from a biased world. Hidden prejudices may lead to unintended consequences. Furthermore, overconfidence in algorithms’ performance, misinterpretation of statistics, and automated decision-making processes can make it extremely difficult to appeal these decisions.

Even when decisions are appealed, algorithms are usually incapable of explaining “why” they made a decision: careful statistical analysis is needed to disentangle the effects of all the variables considered, and to determine whether or not that decision was unfair. This can make explaining the case to the general public—or to lawyers—very difficult.

ProPublica found one such algorithm, the COMPAS recidivism-risk tool used in Broward County, Florida, to have a racial bias: it more often incorrectly assigned high risk scores to black defendants than to white ones. Yet Northpointe, the company that made the software, argued it was unbiased. The higher rate of false positives for black defendants could be due to the fact that they are arrested more often by the police.
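The statistical core of that dispute is a comparison of error rates across groups. Here is a minimal Python sketch of the computation with invented records (ProPublica's actual analysis used thousands of real cases and many more controls):

```python
from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_reoffended).
# The data here is invented purely to illustrate the computation.
records = [
    ("A", True, False), ("A", False, False), ("A", True, True),
    ("B", True, False), ("B", False, False), ("B", False, True),
]

def false_positive_rate_by_group(rows):
    """FPR = flagged as high risk but did not reoffend, divided by all
    people in the group who did not reoffend."""
    flagged = defaultdict(int)
    negatives = defaultdict(int)
    for group, predicted_high, reoffended in rows:
        if not reoffended:
            negatives[group] += 1
            if predicted_high:
                flagged[group] += 1
    return {g: flagged[g] / negatives[g] for g in negatives}

print(false_positive_rate_by_group(records))
```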

The case illustrates how algorithms fed on historical data can perpetuate historical biases. HireVue’s hiring tool raises a similar concern: it records job applicants on video, analyzes their verbal and non-verbal responses to a series of questions, and assigns each candidate a score. It then compares that score against those of the highest-performing employees currently at the company, as a substitute for a personality test. Critics of the system argue that this just ensures your future employees look and sound like those you’ve hired in the past.

Even when algorithms don’t appear to be making obvious decisions, they can wield an outsized influence on the world. Part of the Trump-Russia scandal involves the political ads bought on Facebook, whose micro-targeting was enabled by Facebook’s algorithms. And Facebook’s own experiments in 2012 demonstrated that the platform could nudge people to go to the polls by altering what they saw in their newsfeeds. According to Facebook, that experiment pushed between 60,000 and 280,000 additional voters to the polls; that number could easily exceed the margin of victory in a close election.

The Algorithms Beat

Nick Diakopoulos, Director of the Computational Journalism Lab at Northwestern University, is one of the researchers hoping to prevent a world where mysterious, black-box algorithms are empowered to make ever more important decisions, with no way of explaining them and no one held accountable when they go wrong.

Diakopoulos sees several types of algorithm stories that matter to the public. The first is when an algorithm behaves unfairly, as in the Broward County case. The second arises from errors or mistakes: algorithms can be poorly designed; they can work from incorrect datasets; or they can fail to work in specific cases. Then, because the algorithm is perceived as infallible, those errors can persist, such as graphic or disturbing videos that slip through YouTube’s content filter.

Finally, the algorithms may not be entirely to blame: humans can use or abuse algorithms in ways that weren’t intended. Take the case detailed in Cathy O’Neil’s wonderful book, Weapons of Math Destruction. A Washington, DC teacher was fired for having a low “teacher assessment score.” The score was calculated based on whether students’ standardized test scores improved under a specific teacher. But this created a perverse incentive: some teachers lied and inflated their students’ scores, and those who didn’t cheat were the ones who ended up fired. The algorithm was being abused by the teachers—but, arguably, it should never have been used as the main factor in deciding who got bonuses and who got fired.

Finding the Story

So how can journalists hope to find stories in this new era? One way is to obtain raw code for an audit. If the code is used by the government, such as in the 250+ algorithms tracked by the website Algorithm Tips, freedom of information requests may allow journalists to access the code.

If the bad behavior arises from a simple coding error, an expert may be able to reveal it, but issues with algorithms tend to be far more complicated. If even the people who coded the system can’t predict or interpret its behavior, it will be difficult for outsiders to infer a personality from a page of Python.

“Reverse-engineering” the algorithm—monitoring how it behaves, and occasionally prodding it with a well-chosen input—might be more successful.
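In practice, "prodding it with a well-chosen input" often means holding every attribute constant and varying one at a time. A minimal sketch of that idea, assuming a hypothetical opaque scoring function, might look like this:

```python
import copy

def probe_attribute(black_box_score, base_profile, attribute, values):
    """Query a black-box scoring function with profiles that are identical
    except for one attribute, to see whether that attribute alone shifts
    the outcome. `black_box_score` is a stand-in for any opaque system."""
    results = {}
    for value in values:
        profile = copy.deepcopy(base_profile)
        profile[attribute] = value
        results[value] = black_box_score(profile)
    return results

# Hypothetical usage: does changing only the reported zip code change the score?
# scores = probe_attribute(black_box_score, applicant, "zip_code", ["10001", "60624"])
```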

AlgorithmWatch in Germany gathers data from users to see how they are affected by advertising and newsfeed algorithms; WhoTargetsMe is a browser plugin that collects information about political advertising and tells users who’s trying to influence their vote. By crowdsourcing data from a wide range of people, researchers can analyze how an algorithm behaves in the field.

Investigative journalists, posing as various people, can also attempt to use the algorithms themselves to expose how they behave—along with their vulnerabilities. VICE News recently used this approach to demonstrate that anyone could pose as a US Senator for the purposes of Facebook’s “Paid for by…” feature, which was intended to make political ads transparent.

Who’s Responsible?

Big tech companies derive much of their market value from the algorithms they’ve designed and the data they’ve gathered—they are unlikely to share them with prying journalists or regulators.

Yet without access to the data and the teams of analysts these companies can deploy, it’s hard to get a handle on what’s happening and who’s responsible. Algorithms are not static: Google’s algorithms change 600 times a year. They are dynamic systems that respond to changing conditions in the environment, and therefore their behavior might not be consistent.

Finally, linking the story back to a responsible person can be tough, especially when the organizational structure is as opaque as the algorithms themselves.

As difficult as these stories may be to discover and relate accurately, journalists, politicians, and citizens must start adapting to a world where algorithms increasingly call the shots. There’s no turning back. Humans cannot possibly analyze the sheer volume of data that companies and governments will hope to leverage to their advantage.

As algorithms become ever more pervasive and influential—shaping whole nations and societies—holding them accountable will be just as important as holding politicians responsible. The institutions and tools to do this must be developed now—or we will all have to live with the consequences.

Sci-Fi Movies Are the Secret Weapon That Could Help Silicon Valley Grow Up
https://singularityhub.com/2018/11/17/sci-fi-movies-are-the-secret-weapon-that-could-help-silicon-valley-grow-up/
Sat, 17 Nov 2018 15:00:09 +0000

If there’s one line that stands the test of time in Steven Spielberg’s 1993 classic Jurassic Park, it’s probably Jeff Goldblum’s exclamation, “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”

Despite growing concerns that powerful emerging technologies could lead to unexpected and wide-ranging consequences, innovators are struggling with how to develop beneficial new products while being socially responsible. Part of the answer could lie in watching more science fiction movies like Jurassic Park.

Hollywood Lessons in Societal Risks

I’ve long been interested in how innovators and others can better understand the increasingly complex landscape around the social risks and benefits associated with emerging technologies. Growing concerns over the impacts of tech on jobs, privacy, security and even the ability of people to live their lives without undue interference highlight the need for new thinking around how to innovate responsibly.

New ideas require creativity and imagination, and a willingness to see the world differently. And this is where science fiction movies can help.

Sci-fi flicks are, of course, notoriously unreliable when it comes to accurately depicting science and technology. But because their plots are often driven by the intertwined relationships between people and technology, they can be remarkably insightful in revealing social factors that affect successful and responsible innovation.

This is clearly seen in Jurassic Park. The movie provides a surprisingly good starting point for thinking about the pros and cons of modern-day genetic engineering and the growing interest in bringing extinct species back from the dead. But it also opens up conversations around the nature of complex systems that involve both people and technology, and the potential dangers of “permissionless” innovation that’s driven by power, wealth and a lack of accountability.

As with Jurassic Park, Ex Machina centers around a wealthy and unaccountable entrepreneur who is supremely confident in his own abilities. In this case, the technology in question is artificial intelligence.

The movie tells a tale of an egotistical genius who creates a remarkable intelligent machine—but he lacks the awareness to recognize his limitations and the risks of what he’s doing. It also provides a chilling insight into potential dangers of creating machines that know us better than we know ourselves, while not being bound by human norms or values.

The result is a sobering reminder of how, without humility and a good dose of humanity, our innovations can come back to bite us.

The technologies in Jurassic Park, Minority Report, and Ex Machina lie beyond what is currently possible. Yet these films are often close enough to emerging trends that they help reveal the dangers of irresponsible, or simply naive, innovation. This is where these and other science fiction movies can help innovators better understand the social challenges they face and how to navigate them.

Real-World Problems Worked Out On-Screen

In a recent op-ed in the New York Times, journalist Kara Swisher asked, “Who will teach Silicon Valley to be ethical?” Prompted by a growing litany of socially questionable decisions amongst tech companies, Swisher suggests that many of them need to grow up and get serious about ethics. But ethics alone are rarely enough. It’s easy for good intentions to get swamped by fiscal pressures and mired in social realities.

Elon Musk has shown that brilliant tech innovators can take ethical missteps along the way. Image Credit: AP Photo/Chris Carlson

Technology companies increasingly need to find some way to break from business as usual if they are to become more responsible. High-profile cases involving companies like Facebook and Uber as well as Tesla’s Elon Musk have highlighted the social as well as the business dangers of operating without fully understanding the consequences of people-oriented actions.

Many more companies are struggling to create socially beneficial technologies and discovering that, without the necessary insights and tools, they risk blundering about in the dark.

For instance, earlier this year, researchers from Google and DeepMind published details of an artificial intelligence-enabled system that can lip-read far better than people. According to the paper’s authors, the technology has enormous potential to improve the lives of people who have trouble speaking aloud. Yet it doesn’t take much to imagine how this same technology could threaten the privacy and security of millions—especially when coupled with long-range surveillance cameras.

Developing technologies like this in socially responsible ways requires more than good intentions or simply establishing an ethics board. People need a sophisticated understanding of the often complex dynamic between technology and society. And while, as Mozilla’s Mitchell Baker suggests, scientists and technologists engaging with the humanities can be helpful, it’s not enough.

Here is where science fiction movies become a powerful tool for guiding innovators, technology leaders and the companies where they work. Their fictional scenarios can reveal potential pitfalls and opportunities that can help steer real-world decisions toward socially beneficial and responsible outcomes, while avoiding unnecessary risks.

And science fiction movies bring people together. By their very nature, these films are social and educational levelers. Look at who’s watching and discussing the latest sci-fi blockbuster, and you’ll often find a diverse cross-section of society. The genre can help build bridges between people who know how science and technology work, and those who know what’s needed to ensure they work for the good of society.

This is the underlying theme in my new book Films from the Future: The Technology and Morality of Sci-Fi Movies. It’s written for anyone who’s curious about emerging trends in technology innovation and how they might potentially affect society. But it’s also written for innovators who want to do the right thing and just don’t know where to start.

Of course, science fiction films alone aren’t enough to ensure socially responsible innovation. But they can help reveal some profound societal challenges facing technology innovators and possible ways to navigate them. And what better way to learn how to innovate responsibly than to invite some friends round, open the popcorn and put on a movie?

It certainly beats being blindsided by risks that, with hindsight, could have been avoided.

The Spatial Web Will Map Our 3D World—And Change Everything In the Process
https://singularityhub.com/2018/11/16/the-spatial-web-will-map-our-3d-world-and-change-everything-about-it-in-the-process/
Fri, 16 Nov 2018 15:00:39 +0000

The boundaries between digital and physical space are disappearing at a breakneck pace. What was once static and boring is becoming dynamic and magical.

For all of human history, looking at the world through our eyes was the same experience for everyone. Beyond the bounds of an over-active imagination, what you see is the same as what I see.

But all of this is about to change. Over the next two to five years, the world around us is about to light up with layer upon layer of rich, fun, meaningful, engaging, and dynamic data. Data you can see and interact with.

This magical future ahead is called the Spatial Web and will transform every aspect of our lives, from retail and advertising, to work and education, to entertainment and social interaction.

Massive change is underway as a result of a series of converging technologies, from 5G global networks and ubiquitous artificial intelligence, to 30+ billion connected devices (known as the IoT), each of which will generate scores of real-world data points every second, everywhere.

The current AI explosion will make everything smart, autonomous, and self-programming. Blockchain and cloud-enabled services will support a secure data layer, putting data back in the hands of users and allowing us to build complex rule-based infrastructure in tomorrow’s virtual worlds.

And with the rise of online-merge-offline (OMO) environments, two-dimensional screens will no longer serve as our exclusive portal to the web. Instead, virtual and augmented reality eyewear will allow us to interface with a digitally-mapped world, richly layered with visual data.

Welcome to the Spatial Web. Over the next few months, I’ll be doing a deep dive into the Spatial Web (a.k.a. Web 3.0), covering what it is, how it works, and its vast implications across industries, from real estate and healthcare to entertainment and the future of work. In this blog, I’ll discuss the what, how, and why of Web 3.0—humanity’s first major foray into our virtual-physical hybrid selves (BTW, this year at Abundance360, we’ll be doing a deep dive into the Spatial Web with the leaders of HTC, Magic Leap, and High Fidelity).

Let’s dive in.

What is the Spatial Web?

While we humans exist in three dimensions, our web today is flat.

The web was designed for shared information, absorbed through a flat screen. But as proliferating sensors, ubiquitous AI, and interconnected networks blur the lines between our physical and online worlds, we need a spatial web to help us digitally map a three-dimensional world.

To put Web 3.0 in context, let’s take a trip down memory lane. In the late 1980s, the newly-birthed world wide web consisted of static web pages and one-way information—a monumental system of publishing and linking information unlike any unified data system before it. To connect, we had to dial up through unstable modems and struggle through insufferably slow connection speeds.

But emerging from this revolutionary (albeit non-interactive) infodump, Web 2.0 has connected the planet more in one decade than empires did in millennia.

We’ve seen the explosion of social networking sites, wikis, and online collaboration platforms. Consumers have become creators; physically isolated users have been handed a global microphone; and entrepreneurs can now access billions of potential customers.

But if Web 2.0 took the world by storm, the Spatial Web emerging today will leave it in the dust.

While there’s no clear consensus about its definition, the Spatial Web refers to a computing environment that exists in three-dimensional space—a twinning of real and virtual realities—enabled via billions of connected devices and accessed through the interfaces of virtual and augmented reality.

In this way, the Spatial Web will enable us to both build a twin of our physical reality in the virtual realm and bring the digital into our real environments.

It’s the next era of web-like technologies:

Spatial computing technologies, like augmented and virtual reality;

Physical computing technologies, like IoT and robotic sensors;

And decentralized computing: both blockchain—which enables greater security and data authentication—and edge computing, which pushes computing power to where it’s most needed, speeding everything up.

Geared with natural language search, data mining, machine learning, and AI recommendation agents, the Spatial Web is a growing expanse of services and information, navigable with the use of ever-more-sophisticated AI assistants and revolutionary new interfaces.

Where Web 1.0 consisted of static documents and read-only data, Web 2.0 introduced multimedia content, interactive web applications, and social media on two-dimensional screens. But converging technologies are quickly transcending the laptop, and will even disrupt the smartphone in the next decade.

With the rise of wearables, smart glasses, AR / VR interfaces, and the IoT, the Spatial Web will integrate seamlessly into our physical environment, overlaying every conversation, every road, every object, conference room, and classroom with intuitively-presented data and AI-aided interaction.

Think: the Oasis in Ready Player One, where anyone can create digital personas, build and invest in smart assets, do business, complete effortless peer-to-peer transactions, and collect real estate in a virtual world.

Or imagine a virtual replica or “digital twin” of your office, each conference room authenticated on the blockchain, requiring a cryptographic key for entry.

As I’ve discussed with my good friend and “VR guru” Philip Rosedale, I’m absolutely clear that in the not-too-distant future, every physical element of every building in the world is going to be fully digitized, existing as a virtual incarnation or even as any number of virtual copies. “Meet me at the top of the Empire State Building?” “Sure, which one?”

This digitization of life means that suddenly every piece of information can become spatial, every environment can be smarter by virtue of AI, and every data point about me and my assets—both virtual and physical—can be reliably stored, secured, enhanced, and monetized.

In essence, the Spatial Web lets us interface with digitally-enhanced versions of our physical environment and build out entirely fictional virtual worlds—capable of running simulations, supporting entire economies, and even birthing new political systems.

But while I’ll get into the weeds of different use cases next week, let’s first concretize.

How Does It Work?

Let’s start with the stack. In the PC days, we had a database accompanied by a program that could ingest that data and present it to us as digestible information on a screen.

Then, in the early days of the web, data migrated to servers. Information was fed through a website, with which you would interface via a browser—whether Mosaic or Mozilla.

And then came the cloud.

Resident either at the edge of the cloud or on your phone, today’s rapidly proliferating apps now allow us to interact with previously read-only data through a smartphone. Meanwhile, Siri and Alexa have brought us voice interfaces, AI-geared phone cameras can now determine your identity, and sensors are beginning to read our gestures.

And now we’re not only looking at our screens but through them, as the convergence of AI and AR begins to digitally populate our physical worlds.

While Pokémon Go sent millions of mobile game-players on virtual treasure hunts, IKEA is just one of the many companies letting you map virtual furniture within your physical home—simulating everything from cabinets to entire kitchens. No longer one-sided recipients, we’re beginning to see through sensors and creatively insert digital content into our everyday environments.

Let’s take a look at how the latest incarnation might work. In this new Web 3.0 stack, my personal AI would act as an intermediary, accessing public or privately-authorized data through the blockchain on my behalf, then feeding it through an interface layer composed of everything from my VR headset to numerous wearables to my smart environment (IoT-connected devices or even in-home robots).

But as we attempt to build a smart world with smart infrastructure, smart supply chains, and smart everything else, we need a set of basic standards with addresses for people, places, and things. Just as our web today relies on the internet protocol suite (TCP/IP) and other infrastructure, by which your computer is addressed and data packets are transferred, we need infrastructure for the Spatial Web.

And a select group of players is already stepping in to fill this void. Proposing new structural designs for Web 3.0, some are attempting to evolve today’s web model from text-based web pages in 2D to three-dimensional AR and VR web experiences located in both digitally-mapped physical worlds and newly-created virtual ones.

With a spatial programming language analogous to HTML, imagine building a linkable address for any physical or virtual space, granting it a format that then makes it interchangeable and interoperable with all other spaces.

But it doesn’t stop there.

As soon as we populate a virtual room with content, we then need to encode who sees it, who can buy it, who can move it…

And the Spatial Web’s eventual governing system (for posting content on a centralized grid) would allow us to address everything from the room you’re sitting in, to the chair on the other side of the table, to the building across the street.

Just as we have a DNS for the web and the purchasing of web domains, once we give addresses to spaces (akin to granting URLs), we then have the ability to identify and visit addressable locations, physical objects, individuals, or pieces of digital content in cyberspace.
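No such standard exists yet, so any concrete example is speculative. But as a purely hypothetical sketch, a spatial analog of DNS might map human-readable space addresses to coordinates, owners, and permissions:

```python
# Purely hypothetical sketch of a spatial address registry. The address
# scheme, fields, and resolver are invented for illustration; no such
# standard exists today.
spatial_registry = {
    "space://empire-state-building/floor-86/observation-deck": {
        "coordinates": (40.7484, -73.9857, 320.0),   # lat, lon, altitude (m)
        "owner": "did:example:1234",                  # a decentralized identifier
        "permissions": {"view": "public", "modify": "owner-only"},
        "digital_twins": 3,                           # virtual copies of the space
    },
}

def resolve(address):
    """Look up a spatial address, much as DNS resolves a domain name."""
    record = spatial_registry.get(address)
    if record is None:
        raise KeyError(f"Unregistered spatial address: {address}")
    return record

print(resolve("space://empire-state-building/floor-86/observation-deck")["owner"])
```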

And these not only apply to virtual worlds, but to the real world itself. As new mapping technologies emerge, we can now map rooms, objects, and large-scale environments into virtual space with increasing accuracy.

We might then dictate who gets to move your coffee mug in a virtual conference room, or when a team gets to use the room itself. Rules and permissions would be set in the grid, decentralized governance systems, or in the application layer.

Taken one step further, imagine then monetizing smart spaces and smart assets. If you have booked the virtual conference room, perhaps you’ll let me pay you 0.25 BTC to use it instead?

But given the Spatial Web’s enormous technological complexity, what’s allowing it to emerge now?

Why Is It Happening Now?

While countless entrepreneurs have already started harnessing blockchain technologies to build decentralized apps (or dApps), two major developments are allowing today’s birth of Web 3.0:

High-resolution wireless VR/AR headsets are finally catapulting virtual and augmented reality out of a prolonged winter.

The International Data Corporation (IDC) predicts the VR and AR headset market will reach 65.9 million units by 2022. In the next 18 months alone, two billion devices will be AR-enabled. And tech giants across the board have already begun investing heavy sums.

In early 2019, HTC is releasing the VIVE Focus, a wireless self-contained VR headset. At the same time, Facebook is charging ahead with its Project Santa Cruz—the Oculus division’s next-generation standalone, wireless VR headset. And Magic Leap has finally rolled out its long-awaited Magic Leap One mixed reality headset.

Mass deployment of 5G will drive 10 to 100-gigabit connection speeds in the next 6 years, matching hardware progress with the needed speed to create virtual worlds.

And with such democratizing speeds, every user will be able to develop in VR.

But accompanying these two catalysts is also an important shift towards the decentralized web and a demand for user-controlled data.

Converging technologies, from immutable ledgers and blockchain to machine learning, are now enabling the more direct, decentralized use of web applications and creation of user content. With no central point of control, middlemen are removed from the equation and anyone can create an address, independently interacting with the network.

Enabled by a permission-less blockchain, any user—regardless of birthplace, gender, ethnicity, wealth, or citizenship—would thus be able to establish digital assets and transfer them seamlessly, granting us a more democratized Internet.

And with data stored on distributed nodes, this also means no single point of failure. One could have multiple backups, accessible only with digital authorization, leaving users immune to any single server failure.

Implications Abound—What’s Next…

With a newly-built stack and an interface built from numerous converging technologies, the Spatial Web will transform every facet of our everyday lives—from the way we organize and access our data, to our social and business interactions, to the way we train employees and educate our children.

We’re about to start spending more time in the virtual world than ever before. Beyond entertainment or gameplay, our livelihoods, work, and even personal decisions are already becoming mediated by a web electrified with AI and newly-emerging interfaces.

In our next blog on the Spatial Web, I’ll do a deep dive into the myriad industry implications of Web 3.0, offering tangible use cases across sectors.

Join Me

Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘on ramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level.

How Quantum Computing Is Enabling Breakthroughs in Chemistry
https://singularityhub.com/2018/11/15/how-quantum-computing-is-enabling-breakthroughs-in-chemistry/
Thu, 15 Nov 2018 15:30:23 +0000

Note: Mark Jackson is Scientific Lead of Business Development at Cambridge Quantum Computing.

Quantum computing is expected to solve computational questions that cannot be addressed by existing classical computing methods. It is now accepted that the very first discipline that will be greatly advanced by quantum computers is quantum chemistry.

Quantum Computers

In 1982, the Nobel Prize-winning physicist Richard Feynman observed that simulating and then analyzing molecules was so difficult for a digital computer as to make it impossible for any practical use. The problem was not that the equations governing such simulations were difficult.

In fact, they were comparatively straightforward, and had already been known for decades. The problem was that most molecules of interest contained hundreds of electrons, and each of these electrons interacted with every other electron in a quantum mechanical fashion—resulting in millions of interactions that even powerful computers could not handle.

To overcome the quantum nature of the equations, Feynman proposed quantum computers, which perform calculations based on the laws of quantum physics, as the ultimate answer. Unfortunately, such precise manipulation of individual quantum objects was far from technically possible. The joke for the past 35 years has been that quantum computing is always ten years away.

In the past few years, what was once a distant dream has slowly become a reality. Not only do quantum computers now exist, but millions of programs have been executed on them via the cloud, and useful applications have started to emerge.

The power of a quantum computer can be roughly estimated by the number of qubits, or quantum bits: each qubit can represent a 1 and 0 state simultaneously. There are a number of promising hardware approaches to quantum computing, including superconducting, ion trap, and topological. Each has advantages and disadvantages, but superconducting has taken an early lead in terms of scalability. Google, IBM, and Intel have each used this approach to fabricate quantum processors ranging from 49 to 72 qubits. Qubit quality has also improved.
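To get a feel for what "representing a 1 and 0 state simultaneously" means, a qubit's state can be written as two complex amplitudes whose squared magnitudes give measurement probabilities. The NumPy sketch below is a classical simulation for intuition only, not how real quantum hardware is programmed:

```python
import numpy as np

# A qubit state is a 2-element complex vector: amplitudes for |0> and |1>.
ket0 = np.array([1.0, 0.0], dtype=complex)

# The Hadamard gate puts the qubit into an equal superposition of 0 and 1.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ ket0
probabilities = np.abs(state) ** 2   # -> [0.5, 0.5]

# Simulating n qubits classically requires a vector of 2**n amplitudes,
# which is why molecules with hundreds of interacting electrons overwhelm
# conventional machines.
print(probabilities)
```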

The reason quantum simulation of molecules is so significant is that “classical” digital computers find it virtually impossible to tackle multi-reference states; in many cases, classical computing methods fail not only quantitatively but also qualitatively in describing the electronic structure of molecules.

An outstanding problem—and the one recently solved by Cambridge Quantum Computing (CQC) and the Japanese materials company JSR Corporation—is finding ways for a quantum computer to run such calculations efficiently and with the required chemical accuracy to make a difference in the real world. Their program was run on IBM’s 20-qubit processor, as both CQC and JSR are members of the IBM Q Network.

Why is chemistry of such interest? Chemistry is one of the first commercially lucrative applications for a variety of reasons. Researchers hope to discover more energy-efficient materials to be used in batteries or solar panels. There are also environmental benefits: about two percent of the world’s energy supply goes toward fertilizer production, which is known to be grossly inefficient and could be improved by sophisticated chemical analysis.

Finally, there are applications in personalized medicine, with the possibility of predicting how pharmaceuticals would affect individuals based on their genetic makeup. The long-term vision is the ability to design a drug for a particular individual to maximize treatment and minimize side effects.

There were two strategies employed by CQC and JSR Corp that allowed the researchers to make this advance. First, they used CQC’s proprietary compiler to most efficiently convert the computer program into instructions for qubit manipulation. Such efficiency is particularly essential on today’s low-qubit machines, in which every qubit is needed and speed of execution is critical.

Second, they utilized quantum machine learning, a special sub-field of machine learning that uses vector-like amplitudes rather than mere probabilities. The method of quantum machine learning being used is specially designed for low-qubit quantum computers, offloading some of the calculations to conventional processors.

The next few years will see dramatic advances in both quantum hardware and software. As calculations become more refined, more industries will be able to take advantage of applications like quantum chemistry. Gartner predicts that within four years, 20 percent of corporations will have a budget for quantum computing. Within ten years, it should be an integral component of technology.

Why Scientists Are Rushing to Catalog the World’s Poop
https://singularityhub.com/2018/11/15/why-scientists-are-rushing-to-catalog-the-worlds-poop/
Thu, 15 Nov 2018 15:00:53 +0000

If a group of scientists is successful, the Svalbard Global Seed Vault will be getting a cousin—one that may initially sound rather strange. Instead of gathering seeds to preserve plant species, this project involves gathering fecal samples from people all over the globe.

The effort is known as the Global Microbiome Conservancy (GMC), and its goal is to catalog and safe-keep the different kinds of gut bacteria found in humans’ digestive systems across the planet. It’s an endeavor that could be under threat from changing diets and lifestyles.

Healthy Mysteries

Each of us is a generous host to an almost uncountable number of microorganisms, including bacteria, fungi, and viruses, collectively known as the microbiome. These microorganisms play a central part in everything from our immune systems to our metabolism. For example, the state of your gut bacteria seems to play a vital role in allergies, diabetes, and some forms of cancer, as well as in how well you respond to certain types of medicine. There also seems to be a link between gut bacteria and psychological states like anxiety and depression.

Scientists believe that altering the composition of gut bacteria in the right way can lead to a range of health benefits. Eric Alm, an MIT microbiologist and one of the founders of GMC, believes there are many more potential treatments linked to gut bacteria out there than we know of today.

“I’m 100 percent confident that there are relevant medical applications for hundreds of strains we’ve screened and characterized,” he told Science.

The Diverse Gut Bacteria

Alm and his collaborators have been collecting gut bacteria samples from individuals across several continents. The process itself is less than glamorous. Plastic bowls are handed out, people poop in them, and they hand them back. The samples are then processed, fixed, and dried for DNA sequencing and measurement of lipid content. Samples are split into small tubes and shipped back to a lab, where the different bacterial strains are isolated and then preserved in freezers.

So far, the GMC has analyzed samples from people in North America, the Arctic, and Africa. The strains cataloged include five previously unknown bacterial genera from North American contributors, while the strains from Africa and the Arctic include 55 unknown genera.

GMC’s budget will support collection trips until 2021. By then the team hopes to have visited about 34 countries, covering the Arctic, Africa, Asia, Oceania, and South America. The team hopes to raise additional funds to expand their research.

A Connection to Diabetes

The results from Africa and the Arctic illustrate how indigenous people living on traditional diets tend to have more diverse gut microbiomes, a fact that appears to be linked to the absence of certain diseases in those populations.

People living in Western, more urbanized societies tend to have less diverse gut microbiomes, which can lead to health issues; their diets tend to include more processed foods, and their food production relies more heavily on antibiotics.

“There is a critical connection between autoimmune disorders and a decline in gut microbe diversity,” according to Ramnik Xavier, co-director of MIT’s Center for Microbiome Informatics and Therapeutics.

“These and other discoveries begin to paint a picture of the ‘missing microbiome’ and underscore the importance of identifying gut microbes that may be depleted from industrialized societies,” Xavier explained.

Diversity Under Threat

GMC’s effort to find new potential cures in gut bacteria is turning into a race against time. The rapid westernization of many traditional societies is changing diets in ways that could lead to the disappearance of certain kinds of gut bacteria.

“Strains that co-evolved with humans are currently disappearing,” Alm told Science.

This could have long-term effects on efforts to understand precisely how the microbiome helps fend off disease and how it could be used as a tool to improve our health.

Designer Babies, and Their Babies: How AI and Genomics Will Impact Reproduction
https://singularityhub.com/2018/11/14/designer-babies-and-their-babies-where-ai-and-genomics-could-take-us/
Wed, 14 Nov 2018 15:00:40 +0000

As if stand-alone technologies weren’t advancing fast enough, we’re in an age where we must study the intersection points of these technologies. How is what’s happening in robotics influenced by what’s happening in 3D printing? What could be made possible by applying the latest advances in quantum computing to nanotechnology?

Along these lines, one crucial tech intersection is that of artificial intelligence and genomics. Each field is seeing constant progress, but Jamie Metzl believes it’s their convergence that will really push us into uncharted territory, beyond even what we’ve imagined in science fiction. “There’s going to be this push and pull, this competition between the reality of our biology with its built-in limitations and the scope of our aspirations,” he said.

Life As We Know It

Metzl explained how genomics as a field evolved slowly—and then quickly. In 1953, James Watson and Francis Crick identified the double helix structure of DNA, and realized that the order of the base pairs held a treasure trove of genetic information. There was such a thing as a book of life, and we’d found it.

In 2003, when the Human Genome Project was completed (after 13 years and $2.7 billion), we learned the order of the genome’s 3 billion base pairs, and the location of specific genes on our chromosomes. Not only did a book of life exist, we figured out how to read it.

Fifteen years after that, it’s 2018 and precision gene editing in plants, animals, and humans is changing everything, and quickly pushing us into an entirely new frontier. Forget reading the book of life—we’re now learning how to write it.

“Readable, writable, and hackable, what’s clear is that human beings are recognizing that we are another form of information technology, and just like our IT has entered this exponential curve of discovery, we will have that with ourselves,” Metzl said. “And it’s intersecting with the AI revolution.”

Learning About Life Meets Machine Learning

In 2016, DeepMind’s AlphaGo program outsmarted the world’s top Go player. In 2017 AlphaGo Zero was created: unlike AlphaGo, AlphaGo Zero wasn’t trained using previous human games of Go, but was simply given the rules of Go—and in four days it defeated the AlphaGo program.

Our own biology is, of course, vastly more complex than the game of Go, and that, Metzl said, is our starting point. “The system of our own biology that we are trying to understand is massively, but very importantly not infinitely, complex,” he added.

Multiple countries are already starting to produce this data. The UK’s National Health Service recently announced a plan to sequence the genomes of five million Britons over the next five years. In the US, the All of Us Research Program will sequence a million Americans. China is the most aggressive in sequencing its population, with a goal of sequencing half of all newborns by 2020.

“We’re going to get these massive pools of sequenced genomic data,” Metzl said. “The real gold will come from comparing people’s sequenced genomes to their electronic health records, and ultimately their life records.” Getting people comfortable with allowing open access to their data will be another matter; Metzl mentioned that Luna DNA and others have strategies to help people get comfortable with giving consent to their private information. But this is where China’s lack of privacy protection could end up being a significant advantage.

To compare genotypes and phenotypes at scale—first millions, then hundreds of millions, then eventually billions, Metzl said—we’re going to need AI and big data analytic tools, and algorithms far beyond what we have now. These tools will let us move from precision medicine to predictive medicine, knowing precisely when and where different diseases are going to occur and shutting them down before they start.
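At its simplest, comparing genotypes to phenotypes means asking whether a variant is more common among people with a condition than among those without. The toy Python sketch below, with invented data, shows the shape of that computation; real studies use far larger cohorts and much more careful statistics:

```python
# Toy genotype-phenotype comparison with invented data.
# variant_copies: copies (0, 1, 2) of a variant; has_condition: phenotype.
cohort = [
    {"variant_copies": 2, "has_condition": True},
    {"variant_copies": 0, "has_condition": False},
    {"variant_copies": 1, "has_condition": True},
    {"variant_copies": 0, "has_condition": False},
]

def carrier_rates(people):
    """Fraction of variant carriers among cases vs. controls."""
    cases = [p for p in people if p["has_condition"]]
    controls = [p for p in people if not p["has_condition"]]
    rate = lambda group: sum(p["variant_copies"] > 0 for p in group) / len(group)
    return rate(cases), rate(controls)

print(carrier_rates(cohort))  # a large gap hints at an association worth testing
```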

But, Metzl said, “As we unlock the genetics of ourselves, it’s not going to be about just healthcare. It’s ultimately going to be about who and what we are as humans. It’s going to be about identity.”

Designer Babies, and Their Babies

In Metzl’s mind, the most serious application of our genomic knowledge will be in embryo selection.

Currently, in-vitro fertilization (IVF) procedures can extract around 15 eggs, fertilize them, then do pre-implantation genetic testing; right now what’s knowable is single-gene mutation diseases and simple traits like hair color and eye color. “As we get to the millions and then billions of people with sequences, we’ll have information about how these genetics work, and we’re going to be able to make much more informed choices,” Metzl said.

Imagine going to a fertility clinic in 2023. You give a skin graft or a blood sample, and using in-vitro gametogenesis (IVG)—infertility be damned—your skin or blood cells are induced to become eggs or sperm, which are then combined to create embryos. The dozens or hundreds of embryos created from artificial gametes each have a few cells extracted from them, and these cells are sequenced. The sequences will tell you the likelihood of specific traits and disease states were that embryo to be implanted and taken to full term. “With really anything that has a genetic foundation, we’ll be able to predict with increasing levels of accuracy how that potential child will be realized as a human being,” Metzl said.
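One way such predictions are made today is with a polygenic score: a weighted sum over many variants, with weights estimated from large association studies. The sketch below is a toy with invented variants and weights, not a real clinical model:

```python
# Toy polygenic score: weighted sum of risk-allele copies across many sites.
# Variant IDs and weights are invented; real scores use thousands of variants.
variant_weights = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.30}

def polygenic_score(genotype):
    """genotype maps variant id -> copies of the risk allele (0, 1, or 2)."""
    return sum(variant_weights[v] * genotype.get(v, 0) for v in variant_weights)

embryo_a = {"rs0001": 2, "rs0002": 0, "rs0003": 1}
embryo_b = {"rs0001": 0, "rs0002": 2, "rs0003": 0}
print(polygenic_score(embryo_a), polygenic_score(embryo_b))
```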

This, he added, could lead to some wild and frightening possibilities: if you have 1,000 eggs and you pick one based on its optimal genetic sequence, you could then mate your embryo with somebody else who has done the same thing in a different genetic line. “Your five-day-old embryo and their five-day-old embryo could have a child using the same IVG process,” Metzl said. “Then that child could have a child with another five-day-old embryo from another genetic line, and you could go on and on down the line.”

Sounds insane, right? But wait, there’s more: as Jason Pontin reported earlier this year in Wired, “Gene-editing technologies such as Crispr-Cas9 would make it relatively easy to repair, add, or remove genes during the IVG process, eliminating diseases or conferring advantages that would ripple through a child’s genome. This all may sound like science fiction, but to those following the research, the combination of IVG and gene editing appears highly likely, if not inevitable.”

From Crazy to Commonplace?

It’s a slippery slope from gene editing and embryo-mating to a dystopian race to build the most perfect humans possible. If somebody’s investing so much time and energy in selecting their embryo, Metzl asked, how will they think about the mating choices of their children? IVG could quickly leave the realm of healthcare and enter that of evolution.

“We all need to be part of an inclusive, integrated, global dialogue on the future of our species,” Metzl said. “Healthcare professionals are essential nodes in this.” Not least among this dialogue should be the question of access to tech like IVG; are there steps we can take to keep it from becoming a tool for a wealthy minority, and thereby perpetuating inequality and further polarizing societies?

As Pontin points out, at its inception 40 years ago IVF also sparked fear, confusion, and resistance—and now it’s as normal and common as could be, with millions of healthy babies conceived using the technology.

The disruption that genomics, AI, and IVG will bring to reproduction could follow a similar story cycle—if we’re smart about it. As Metzl put it, “This must be regulated, because it is life.”

Ears Grown From Apples? The Promise of Plants for Engineering Human Tissue
https://singularityhub.com/2018/11/13/an-ear-grown-from-apples-why-the-key-to-tissue-engineering-could-be-plants/
Tue, 13 Nov 2018 15:00:11 +0000

Inspiration for game-changing science can seemingly come from anywhere. A moldy bacterial plate gave us the first antibiotic, penicillin. Zapping bacteria with a platinum electrode led to a powerful chemotherapy drug, cisplatin.

For Dr. Andrew Pelling at the University of Ottawa, his radical idea came from a sci-fi cult classic called The Little Shop of Horrors. Specifically, he was intrigued by the movie’s main antagonist, a man-eating plant called Audrey II.

“What you have here is a plant-like creature with mammalian features,” said Pelling at the Exponential Medicine conference in San Diego last week. “So we started wondering: can we grow this in the lab?”

The Rise of Mechanobiology

Growing a human ear out of apples may seem irrational, but Pelling’s key insight is that an apple’s fibrous interior is strikingly similar to the microenvironments usually used in labs to bio-engineer human tissue.

To fabricate a replacement ear, for example, scientists normally carve or 3D print hollow support structures out of expensive bio-compatible materials. They then seed human stem cells into the structure and painstakingly supply a cocktail of growth factors and nutrients to urge the cells to grow. Eventually, after weeks or months of incubation, the cells spread and differentiate into skin-like cells on the scaffold. The result is a bio-engineered replacement ear.

The problem? The extremely high bar to entry: stem cells, growth factors, and materials for the scaffold are all difficult and expensive to procure.

But are those key components really necessary?

“We often think about biology through the lenses of the genome or biochemistry,” said Pelling. But cells and tissue are living components—they stretch, compress, and shear, producing mechanical forces that act upon each other.

In a series of experiments, Pelling and others found that these mechanical forces aren’t just a side product of biology; rather, they seem to crucially regulate the underlying molecular machinery of the cell.

An early study found that every stage of the growth of embryos—a “fundamental process in biology”—can be regulated and controlled by mechanical information. In other words, physical forces can drive cells to divide and migrate through tissues as our genetic code guides the formation of an entire body.

In the lab, stretching and mechanically stimulating the cells seems to fundamentally change their behaviors, too. In one assay, Pelling’s team peppered cancerous cells onto a sheet of skin cells grown on the bottom of a Petri dish. The cancer cells huddled together into little balls, forming a distinct barrier between the microtumor and the skin cells.

But when the team put the entire cellular system into a device that minutely stretches it—mimicking the body’s breathing and movement—the tumor cells became aggressive, tunneling into the layer of skin cells.

“There’s no gene modification…or biochemistry going on here. This is a purely mechanical influence,” said Pelling. “There’s a fundamental link between these things.”

Even cooler: active movement isn’t necessary for mechanical forces to transform the way cells behave. The shape of their microenvironment is enough to direct their actions.

For example, when Pelling put two cell types into a physical structure with grooves, the cells self-segregated within hours, with one type growing in the troughs and the other on the higher ledges. By simply sensing the shape of that grooved surface they “learned” to separate and spatially pattern over long ranges.

The takeaway: using shape alone, it’s possible to stimulate cells to form complex three-dimensional patterns.

Here’s where the apple comes in.

Apple of My…Ear?

Under the microscope, the microenvironment of an apple is on the same length scale as engineered surfaces for fabricating replacement tissues. That discovery got the team to wonder: is it possible to exploit that surface pattern of plants to grow human organs?

To test it out, they took an apple and washed away all its plant cells, DNA, and other biomolecules. This left them with a fibrous scaffold—the stuff that usually gets stuck in your teeth. When the team stuck human and animal cells inside, the cells began to grow and spread.

Encouraged, the team then hand-carved an apple into the shape of a human ear and repeated the process above. Within weeks the cells infiltrated, turning the chunk of apple into a fleshy human ear.

Of course, having the right shape isn’t enough. The replacement tissue also has to survive inside the body.

The team next implanted an apple-based scaffold directly under the skin of a mouse. In just eight weeks, not only had the mouse’s healthy cells invaded the matrix, the rodent’s body also produced new collagen and blood vessels that helped keep the scaffold living and healthy.

That ticks three important aspects for an engineered tissue: it’s safe, it’s biocompatible, and it comes from a sustainable, ethical source.

“This thing is becoming a living part of the body and it used to be an apple, and we did this by going to the grocery store,” said Pelling.

Moving Into the Clinical Space

Pelling is especially excited by his finding because of its simplicity: it doesn’t require stem cells or exotic growth factors to work. The elegant approach exploits the physical structure of the plant.

The team is now broadening its work to three main areas of tissue engineering: soft tissue cartilage, bone, and spinal cord and nerve repair. The key is to match the specific microstructure of a plant to that of the tissue, Pelling explained.

“It’s really exciting to see these kinds of wild ideas translate this way,” he said.

And why restrict ourselves to the body parts nature gave us? If the shape of a scaffold is the sole determinant of engineering a tissue or organ, why not design our own?

Pelling took the idea and ran with it, commissioning a design company to sketch out the scaffold for three different types of ears: an average human ear, a pointy Spock-shaped one, and a wavy one designed to suppress or enhance different frequencies to—in theory—augment hearing.

“The point I want to emphasize is…the strength of blue-sky thinking is actually coupling it to the rigor of the scientific method,” Pelling concluded. Ultimately this is how we’ll create more inventions and solve problems.

]]>126113Breaking Out of the Corporate Bubble With Uncommon Partnershttps://singularityhub.com/2018/11/12/breaking-out-of-the-corporate-bubble-with-uncommon-partners/
Mon, 12 Nov 2018 16:30:41 +0000https://singularityhub.com/?p=125752For big companies, success is a blessing and a curse. You don’t get big without doing something (or many things) very right. It might start with an invention or service the world didn’t know it needed. Your product takes off, and growth brings a whole new set of logistical challenges. Delivering consistent quality, hiring the right team, establishing a strong culture, tapping into new markets, satisfying shareholders. The list goes on.

Eventually, however, what made you successful also makes you resistant to change.

You’ve built a machine for one purpose, and it’s running smoothly, but what about retooling that machine to make something new? Not so easy. Leaders of big companies know there is no future for their organizations without change. And yet, they struggle to drive it.

The book focuses on practical tools that have worked in big companies to break down behavioral and cognitive biases, envision radical futures, and run experiments. These include using science fiction and narrative to see ahead and adopting better measures of success for new endeavors.

A thread running throughout is how to envision a new future and then actually move into it.

We’re limited by the bubbles in which we spend the most time—the corporate bubble, the startup bubble, the nonprofit bubble. The mutually beneficial convergence of complementary bubbles, then, can be a powerful tool for kickstarting transformation. The views and experiences of one partner can challenge the accepted wisdom of the other; resources can flow into newly co-created visions and projects; and connections can be made that wouldn’t otherwise exist.

The authors call such alliances uncommon partners. In the following excerpt from the book, Made In Space, a startup building 3D printers for space, helps Lowe’s explore an in-store 3D printing system, and Lowe’s helps Made In Space expand its vision and focus.

Uncommon Partners

In a dingy conference room at NASA, five prototypical nerds, smelling of Thai food, laid out the path to printing satellites in space and buildings on distant planets. At the end of their four-day marathon, they emerged with an artifact trail that began with early prototypes for the first 3D printer on the International Space Station and ended in the additive-manufacturing future—a future much bigger than 3D printing.

In the additive-manufacturing future, we will view everything as transient, or capable of being repurposed into new things. Rather than throwing away a soda bottle or a bent nail, we will simply reprocess these things into a new hinge for the fence we are building or a light switch plate for the tool shed. Indeed, we might not even go buy bricks for the tool shed, but instead might print them from impurities pulled from the air and the dirt beneath our feet. Such a process would both capture carbon in the air to make the bricks and avoid all the carbon involved in making and then transporting traditional bricks to your house.

If it all sounds a little too science fiction, think again. Lowe’s has already been honored as a Champion of Change by the US government for its prototype system to recycle plastic (e.g., plastic bags and bottles). The future may be closer than you have imagined. But to get there, Lowe’s didn’t work alone. It had to work with uncommon partners to create the future.

Uncommon partners are the types of organizations you might not normally work with, but which can greatly help you create radical new futures. Increasingly, as new technologies emerge and old industries converge, companies are finding that working independently to create all the necessary capabilities to enter new industries or create new technologies is costly, risky, and even counterproductive. Instead, organizations are finding that they need to collaborate with uncommon partners as an ecosystem to cocreate the future together. Nathan [Furr] and his colleague at INSEAD, Andrew Shipilov, call this arrangement an adaptive ecosystem strategy and described how companies such as Lowe’s, Samsung, Mastercard, and others are learning to work differently with partners and to work with different kinds of partners to more effectively discover new opportunities. For Lowe’s, an adaptive ecosystem strategy working with uncommon partners forms the foundation of capturing new opportunities and transforming the company. Despite its increased agility, Lowe’s can’t be (and shouldn’t become) an independent additive-manufacturing, robotics-using, exosuit-building, AR-promoting, fill-in-the-blank-what’s-next-ing company in addition to being a home improvement company. Instead, Lowe’s applies an adaptive ecosystem strategy to find the uncommon partners with which it can collaborate in new territory.

To apply the adaptive ecosystem strategy with uncommon partners, start by identifying the technical or operational components required for a particular focus area (e.g., exosuits) and then sort these components into three groups. First, there are the components that are emerging organically without any assistance from the orchestrator—the leader who tries to bring together the adaptive ecosystem. Second, there are the elements that might emerge, with encouragement and support. Third are the elements that won’t happen unless you do something about them. In an adaptive ecosystem strategy, you can create regular partnerships for the first two elements—those already emerging or that might emerge—if needed. But you have to create the elements in the final category (those that won’t emerge) either with an uncommon partner or by yourself.

For example, when Lowe’s wanted to explore the additive-manufacturing space, it began a search for an uncommon partner to provide the missing but needed capabilities. Unfortunately, initial discussions with major 3D printing companies proved disappointing. The major manufacturers kept trying to sell Lowe’s 3D printers. But the vision our group had created with science fiction was not for vendors to sell Lowe’s a printer, but for partners to help the company build a system—something that would allow customers to scan, manipulate, print, and eventually recycle additive-manufacturing objects. Every time we discussed 3D printing systems with these major companies, they responded that they could do it and then tried to sell printers. When Carin Watson, one of the leading lights at Singularity University, introduced us to Made In Space (a company being incubated in Singularity University’s futuristic accelerator), we discovered an uncommon partner that understood what it meant to cocreate a system.

Initially, Made In Space had been focused on simply getting 3D printing to work in space, where you can’t rely on gravity, you can’t send up a technician if the machine breaks, and you can’t release noxious fumes into cramped spacecraft quarters. But after the four days in the conference room going over the comic for additive manufacturing, Made In Space and Lowe’s emerged with a bigger vision. The company helped lay out an artifact trail that included not only the first printer on the International Space Station but also printing system services in Lowe’s stores.

Of course, the vision for an additive-manufacturing future didn’t end there. It also reshaped Made In Space’s trajectory, encouraging the startup, during those four days in a NASA conference room, to design a bolder future. Today, some of its bold projects include the Archinaut, a system that enables satellites to build themselves while in space, a direction that emerged partly from the science fiction narrative we created around additive manufacturing.

In summary, uncommon partners help you succeed by providing you with the capabilities you shouldn’t be building yourself, as well as with fresh insights. You also help uncommon partners succeed by creating new opportunities from which they can prosper.

Helping Uncommon Partners Prosper

Working most effectively with uncommon partners can require a shift from more familiar outsourcing or partnership relationships. When working with uncommon partners, you are trying to cocreate the future, which entails a great deal more uncertainty. Because you can’t specify outcomes precisely, agreements are typically less formal than in other types of relationships, and they operate under the provisions of shared vision and trust more than binding agreement clauses. Moreover, your goal isn’t to extract all the value from the relationship. Rather, you need to find a way to share the value.

Ideally, your uncommon partners should be transformed for the better by the work you do. For example, Lowe’s uncommon partner developing the robotics narrative was a small startup called Fellow Robots. Through their work with Lowe’s, Fellow Robots transformed from a small team focused on a narrow application of robotics (which was arguably the wrong problem) to a growing company developing a very different and valuable set of capabilities: putting cutting-edge technology on top of the old legacy systems embedded at the core of most companies. Working with Lowe’s allowed Fellow Robots to discover new opportunities, and today Fellow Robots works with retailers around the world, including BevMo! and Yamada. Ultimately, working with uncommon partners should be transformative for both of you, so focus more on creating a bigger pie than on how you are going to slice up a smaller pie.

]]>125752Hacking the Mind Just Got Easier With These New Toolshttps://singularityhub.com/2018/11/12/hacking-the-mind-just-got-easier-with-these-new-tools/
Mon, 12 Nov 2018 16:00:37 +0000https://singularityhub.com/?p=126016For eons, the only way to access the three-pound mushy bio-computer between our ears was to physically crack the skull, or insert a sharp object up the nose.

Lucky for us, these examples of medical barbarism have been relegated to history. Yet the goal of reaching through the skull to modulate brain activity hasn’t changed. Within the brain, millions of neurons and their billions of connections hum with electrical activity, weaving intricate connective patterns that lead to thoughts, behaviors, and memories.

If we have the tools to read and tweak those circuits, we have the key to treating mental disorders, or even augmenting the mind.

Yet to Dr. Divya Chander at Stanford University, today’s brain-modulating technologies have two fundamental flaws that limit their transformative potential. First, most require invasive implants and open-brain surgery. Second, they’re often unwieldy and extremely expensive.

Last week at Singularity University’s Exponential Medicine conference in San Diego, technologists presented new non-invasive devices that seek to simplify and democratize brain modulation. Physically tunneling through the skull may soon be another thing of the past.

Openwater, the Wearable MRI

Being inside an MRI machine is not a pleasant experience. You’re in a tiny claustrophobic tube surrounded by a giant magnet, and instructed to lie extremely still as the machine churns away.

Nevertheless, state-of-the-art MRIs are the current gold standard for generating high-resolution images of your brain structure. Functional MRI, which tracks blood flow—a proxy for neural activity—has also been instrumental in teasing out the intricacies of brain activation in response to a changing environment. But they’re bulky and expensive; two-thirds of humanity has no access to the technology.

To Dr. Mary Lou Jepsen, CEO and founder of Openwater, the solution is simple in concept: shrink the machine down to the size of a ski hat, a bra, or a bandage, and manufacture the gadget at the cost of a smartphone. The trick, she explains, is to move away from magnets and instead turn to light.

The human body is translucent to red and near-infrared light, allowing our tissues—including both skull and brain—to be illuminated. The problem is that the light scatters as it passes through tissue, which prevents a sharp, clear image.

To re-focus the light, Jepsen turned to holograms. “Holography records the intensity of light and the phase of light waves,” she explained. Because it captures all light rays and photons at all positions and angles simultaneously, a hologram can be used to re-direct light rays into a single stream of light.

During the scan, the device first shoots focused ultrasound waves at a spot in the tissue. Next comes the red light, which shifts slightly in color toward orange as it passes through that “sonic spot.”

Jepsen then combines this outgoing orange light with a reference beam of the same orange color to form the hologram. “Holograms can only be made from two beams of light of the same color,” she explained. The resulting hologram is then recorded on a camera chip.

The result? All red light is filtered out, so that the setup only captures information about that particular sonic spot. Spot by spot, the device can image the entire brain.
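To make that scanning procedure concrete, here is a minimal toy sketch in Python of the spot-by-spot loop described above. This is not Openwater’s implementation; every function name is a hypothetical placeholder standing in for hardware, and the “measurements” are dummy values so the sketch runs end to end:

import numpy as np

# Hypothetical stand-ins for the hardware steps described in the article.
def focus_ultrasound(voxel):
    """Focus an ultrasound pulse on one spot (the 'sonic spot') in the tissue."""
    pass

def emit_red_light():
    """Illuminate the tissue with red/near-infrared light."""
    pass

def capture_shifted_light(voxel):
    """Return the orange (frequency-shifted) light coming back from the sonic spot."""
    return np.random.rand()  # dummy value in place of a real optical measurement

def record_hologram(shifted_light):
    """Interfere the shifted light with a same-color reference beam and read the camera chip."""
    return float(shifted_light)

def scan_volume(volume_shape=(16, 16, 16)):
    """Build a 3D intensity map one sonic spot at a time."""
    image = np.zeros(volume_shape)
    for voxel in np.ndindex(*volume_shape):
        focus_ultrasound(voxel)
        emit_red_light()
        shifted = capture_shifted_light(voxel)   # only light that passed the spot is shifted
        image[voxel] = record_hologram(shifted)  # unshifted red light is filtered out
    return image

brain_map = scan_volume()
print(brain_map.shape)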

Openwater is currently building a prototype, and Jepsen is particularly excited about testing it on brain diseases. Because blood absorbs red light, it’s an especially attractive target to image. Tumors often carry five times the blood levels of normal tissues, making them pop under red light; in contrast, stroke restricts blood flow, so blood-deprived tissue shows up as a dark spot on scans.

In theory, the device could even track neural activity. Scientists have long used increased oxygenated blood flow as a proxy for neural activation. Jepsen’s device can track the same changes with light.

Eventually Jepsen hopes to supply rural places, ambulances, and urgent care centers with the device. “I think…this is inevitable,” she concluded.

A Wearable Brain-Machine Interface

Mind-controlled prostheses have come a long way, yet most still require implanted electrodes to precisely capture movement intentions.

Back in 2012, Dr. Eric Leuthardt, a neurosurgeon at Washington University in St. Louis, began experimenting with ways to capture the brain’s movement instructions using wearables.

Specifically, he explained, “I wanted to use these neurotechnologies to connect our mind and heal our brains in the setting of stroke, focusing on patients that lost control of hand functions after the attack.”

The crux of Leuthardt’s system is a peculiar electrical fingerprint in a region of the brain called the premotor cortex. This area plans movements—either real ones or imagined ones—and the signals subsequently get sent to the motor cortex on the other side of the brain and carried out.

Leuthardt found that using a cap embedded with electrodes, he could reliably pick up the low-frequency signals generated by the premotor cortex. These “planning” signals are then sent to a machine learning algorithm to parse out the intended movement. Finally, the results of the computation are used to control a prosthetic to carry out the movement.
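As a rough illustration of that pipeline, here is a hedged Python sketch: band-pass the cap’s EEG channels to keep low-frequency activity, train a simple classifier on labeled trials, and map each new window of signal to a prosthetic command. The filter band, classifier choice, and command labels are assumptions made for illustration, not details of Leuthardt’s actual system:

import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.linear_model import LogisticRegression

FS = 250            # assumed sampling rate of the EEG cap (Hz)
BAND = (0.5, 4.0)   # assumed low-frequency "planning" band (Hz)

def low_freq_features(eeg_window):
    """Band-pass each channel and return one crude feature per channel."""
    b, a = butter(4, [BAND[0] / (FS / 2), BAND[1] / (FS / 2)], btype="band")
    filtered = filtfilt(b, a, eeg_window, axis=-1)
    return filtered.mean(axis=-1)

def train_decoder(trials, labels):
    """Fit a simple classifier on labeled imagined-movement trials."""
    X = np.array([low_freq_features(t) for t in trials])
    return LogisticRegression(max_iter=1000).fit(X, labels)

def decode_intent(decoder, eeg_window):
    """Map a new window of EEG to a prosthetic command (0 = relax, 1 = grasp)."""
    return int(decoder.predict(low_freq_features(eeg_window).reshape(1, -1))[0])

# Toy usage with random data standing in for real recordings.
rng = np.random.default_rng(0)
trials = rng.standard_normal((40, 8, 2 * FS))   # 40 trials, 8 channels, 2 seconds each
labels = rng.integers(0, 2, size=40)            # imagined "relax" vs. "grasp"
decoder = train_decoder(trials, labels)
print(decode_intent(decoder, rng.standard_normal((8, 2 * FS))))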

With training, the stroke patients were able to use their minds to pick up a marble and place it into a cup—a remarkably complex operation. Eventually they could perform everyday tasks with their prosthetic hands, such as pulling up pants.

“What’s so cool about this technology is it’s not a drug, doesn’t require surgery, we’re simply using a technology to harvest the power of your own thoughts to change the wiring and structure of your brain,” said Leuthardt.

Another of Leuthardt’s innovations, the eQuility stimulator, aims to disrupt negative thought patterns in psychiatric disorders such as depression.

In depression, the brain’s various circuits show an imbalance in activation. One way to potentially treat symptoms is to restore that balance. Scientists have been eyeing the vagus nerve—two spaghetti-like nerves that run along the neck and innervate the entire body—as a potential target. Previous stimulators are extremely bulky and need to be implanted under the skin, making them impractical, explained Leuthardt.

eQuility takes advantage of a branch of the vagus nerve that snakes over to the ear. By packing an electrical stimulator inside a headset, the wearable can modulate vagus nerve activity directly from the ear.

Ultimately, we may be approaching another milestone in brain modulation: one that democratizes these technologies, allowing more people to manipulate their brain activity without first going under the knife.

“In the next 30 to 50 years we are going to see a rewriting of the fabric of the human experience,” concluded Leuthardt. “Fundamentally it’s only going to be limited by our imagination.”