Last week, D-Wave announced a new version of its quantum annealing computer. The new machine includes a number of technical improvements, as well as a significant change to the physical arrangement of the chip's qubits. What does all this mean? Combined with D-Wave's online resources, a tool that verges on useful is starting to take shape.

Making a smooth computer

Before we reach the gooey chocolate center, we have to deal with the crusty outer coating: what is a quantum annealer? Most computers work in a straightforward manner: to add two numbers together, you construct a set of logical gates that will perform addition. Each of these gates performs a set of specific and clearly defined operations on its input.

But that is not the only way to perform computation. Most problems can be rewritten so that they represent an energy minimization problem. In this picture, the problem is an energy landscape, and the solution is the lowest-possible energy of that landscape. The trick is finding the combination of bit values that represents that energy.
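As a toy example, consider a made-up two-bit landscape, E(x0, x1) = 3·x0 + 3·x1 - 8·x0·x1; reading off the answer means finding the bit values with the lowest energy:

```python
# A made-up two-bit energy landscape; the "problem" it encodes is simply
# "these two bits should agree, and prefer being on."
def E(x0, x1):
    return 3 * x0 + 3 * x1 - 8 * x0 * x1

# Enumerate the four possible bit combinations and read off the minimum.
best = min(((x0, x1) for x0 in (0, 1) for x1 in (0, 1)), key=lambda b: E(*b))
print(best, E(*best))  # (1, 1) with energy -2
```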

To do this, we start with an energy landscape that is flat: we can start all the bits in the lowest energy of this flat landscape. Then we carefully and slowly modify the landscape around the bits until it represents our problem. If we have done that correctly, the bits are still in their lowest energy state. We obtain a solution by reading off the bit values.

Although this works without anything quantum being involved, D-Wave does this with quantum bits (qubits). That means the qubits are correlated with each other—this is called quantum entanglement. As a result, they change value together, rather than independently.

Tunneling

This allows something called quantum tunneling. Imagine a qubit stuck in a high energy state. Nearby, there is a lower energy state that the qubit would prefer to be in. But to get to that low energy state, it first has to go to an even higher energy state. In a classical system, this creates a barrier to reaching the lower energy state. But in a quantum system, the qubit can tunnel through the energy barrier to enter the lower energy state.

These two properties may allow a computer like the one that D-Wave operates to obtain solutions for some problems more quickly than its classical counterpart.

The devil, however, is in the details. Within the computer, an energy landscape is produced by the coupling (physical connection) among qubits. The coupling controls how strongly the value of one qubit influences the value of the rest of them.

This has always been the major sticking point of the D-Wave machine. Under ideal circumstances, every qubit would have couplers that link it directly to every other qubit. That many connections, however, is impractical.

A qubit all alone

The consequences of the lack of connectivity are severe. Some problems simply cannot be represented on D-Wave machines. Even in cases where they can, the computation can be inefficient. Imagine that a problem requires qubits one and three to be coupled, but they are not directly connected. In that case, you have to search for a chain of intermediate qubits that links them. Say qubit one is linked to qubit five, while qubit two is linked to qubits five and three. Logical qubit one is then physical qubits one and five combined; logical qubit three is physical qubits two and three combined; and the coupler between qubits five and two ties the two logical qubits together. D-Wave refers to this as a chain length of, in this case, two.

Chaining costs physical qubits: several of them are combined into each logical qubit, leaving fewer available for the computation.
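One common way to express a chain (a sketch, not D-Wave's exact internals) is as a penalty term that costs energy whenever the two physical qubits disagree:

```python
# QUBO-form chain penalty: J*(a + b - 2*a*b) is 0 when a == b and J otherwise,
# so for large J the two physical qubits act as a single logical qubit.
J = 8  # chain strength; an assumed value that must dominate the problem terms

def chain_penalty(a, b):
    return J * (a + b - 2 * a * b)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, chain_penalty(a, b))  # agreement costs 0, disagreement costs J
```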

D-Wave's development path has been one of engineering ever more complicated arrangements of qubits to increase the connectivity. Greater connectivity shortens the chains, which leaves a larger number of logical qubits, and it allows a larger number of problems to be encoded in the first place.

The efficiency of structuring some problems is going to be very, very low, meaning that the D-Wave architecture is simply not suited to those problems. But as the connectivity increases, the number of unsuitable problems goes down.

In the previous iteration of this machine, the qubits were structured in blocks of eight, such that connectivity between diagonal blocks was improved compared to two versions ago. This introduced a small improvement in chain lengths.

Now D-Wave has moved on to a Pegasus graph. I don't know how to describe it concisely, so I'm going to describe it in a way that is incorrect in the strict graph-theory sense but that I think will make more sense overall. Instead of a single basic unit of eight qubits, there are now two basic units: a block of eight and a pair.

In the eight-qubit blocks, the qubits are arranged as before, with an inner loop and an outer loop. But the inner and outer loops now have an extra connection. That means that each qubit has five connections within that small block.

The blocks are no longer arranged in a regular grid, either, and the interconnections between the qubits from separate blocks are much denser. Whereas the previous generation connected outer loop qubits to outer loop qubits, now each qubit is connected to both inner and outer loops of neighboring blocks.

Then, on top of that, there is a new network of long-range connections between different blocks. Each qubit has a long-range connection to another qubit in a distant block. The density of the long-range connectivity is increased by the second basic building block: connected pairs. The pairs are placed around the outside of the main block pattern to complete the long-range connectivity.

The idea, I think, is to ensure that the eight-qubit groupings near the sides of the chip still have nearly the same connectivity as inner groups, unlike in the Chimera graphs.
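If you want to inspect the difference yourself, D-Wave's open-source dwave-networkx package can generate both topologies; a quick sketch (the graph sizes here are arbitrary choices):

```python
# Compare the best-case qubit connectivity of the two topologies.
# Requires the dwave-networkx package (pip install dwave-networkx).
import dwave_networkx as dnx

graphs = {"Chimera": dnx.chimera_graph(4), "Pegasus": dnx.pegasus_graph(6)}
for name, g in graphs.items():
    print(name, "max couplers per qubit:", max(d for _, d in g.degree()))
    # Chimera tops out at 6 couplers per qubit, Pegasus at 15.
```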

Make the chains shorter

What does all this mean? First of all, the similarity between the Chimera and Pegasus graphs means that code developed for Chimera should still work on Pegasus. The increased connectivity means the chain lengths are significantly reduced, making calculations more reliable.

To give you an idea of how much the new graph improves the situation, a square lattice with diagonal interconnects requires a chain length of six in the Chimera graph and a chain length of two in the Pegasus implementation. In general, chain lengths are reduced by a factor of two or more. The run times are reduced by 30 to 75 percent on the new machine.
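Numbers like these can be checked with D-Wave's open-source embedding tools. A rough sketch (the embedding heuristic is randomized, so exact chain lengths vary from run to run):

```python
# Embed the same small problem into both topologies and compare chain lengths.
# Requires networkx, dwave-networkx, and minorminer.
import networkx as nx
import dwave_networkx as dnx
import minorminer

# Source problem: a small square lattice with diagonal interconnects.
source = nx.grid_2d_graph(4, 4)
source.add_edges_from(((r, c), (r + 1, c + 1))
                      for r in range(3) for c in range(3))

targets = {"Chimera": dnx.chimera_graph(16), "Pegasus": dnx.pegasus_graph(16)}
for name, target in targets.items():
    emb = minorminer.find_embedding(source.edges, target.edges, random_seed=1)
    print(name, "longest chain:", max(len(chain) for chain in emb.values()))
```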

Aside from the new graph, D-Wave has made improvements at a technical level: the qubits have lower noise, and there are many more of them. The plan is that the new architecture will eventually get D-Wave to 5,000 qubits (up from 2,000). Using the Chimera architecture, this would be a nice (but not stellar) upgrade. Adding the changes in architecture means many more of those physical qubits can be used as independent logical qubits, making this a much more significant upgrade.

Chris Lee
Chris writes for Ars Technica's science section. A physicist by day and science writer by night, he specializes in quantum physics and optics. He lives and works in Eindhoven, the Netherlands. Email: chris.lee@arstechnica.com

I subscribe to Ars precisely because of articles like this one by Chris. Outside of peer-reviewed journal articles, it is hard to find this kind of detail, depth and explanation that goes beyond layman's knowledge and breaks down the how and why in a long(er) format article that makes the reader actually think. Keep 'em coming!

EDIT: that video of qbit connections looks at one point like an idea for new plaid design. The McQbit family plaid perhaps?

Most problems can be rewritten so that they represent an energy minimization problem. In this picture, the problem is an energy landscape, and the solution is the lowest-possible energy of that landscape. The trick is finding the combination of bit values that represents that energy.

How exactly is this rewriting done? And what problems fall under "most problems"? Are there classes of problems for which this rewriting simply cannot be done?

There is no single way of doing the rewriting. But most problems means "almost all". Suppose your problem is to find the closest match between a target word and a long list of words. Well, to do that you can compare each character of the target word with the corresponding character of each list item. Count the number of differences for each item. Then, find the item with the lowest score. That's your closest match.
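A sketch of that word-matching idea (the word list and target are made up):

```python
# Closest match = the candidate with the fewest character differences.
words = ["carrot", "parrot", "karats"]
target = "carpet"

def differences(a, b):
    # Count mismatched characters, plus any length difference.
    return sum(c1 != c2 for c1, c2 in zip(a, b)) + abs(len(a) - len(b))

print(min(words, key=lambda w: differences(target, w)))  # "carrot" (2 changes)
```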

This is a super simplified example. For real problems it's a lot more complex, but the principle is similar. For example, in the Traveling Salesman problem (which is NP-complete), we want to minimize the time or expense of a salesman's travel route. But every other NP-complete problem can be rewritten to match the form of the traveling salesman problem (we say they are mapping reducible to the salesman problem). Therefore every NP-complete problem can be written as a minimization problem. It's difficult for me to think of any problems easier than NP-complete that can't be written as a minimization problem.

Note that quantum computers are not expected to solve NP-complete problems in a reasonable time. There will always be many problems out of the reach of DWave's system, and of future true quantum computers. But there are very good approximation approaches for NP-complete problems. Some of them are expressed as minimization problems.

Always interested in the developments in this space. The article describes far more boundary-pushing development than I'd anticipated. Looking forward to reports of wild success stories applying these systems in areas where traditional computers hit the wall.

Most problems can be rewritten so that they represent an energy minimization problem. In this picture, the problem is an energy landscape, and the solution is the lowest-possible energy of that landscape. The trick is finding the combination of bit values that represents that energy.

How exactly is this rewriting done? And what problems fall under "most problems"? Are there classes of problems for which this rewriting simply cannot be done?

You turn the problem into a program (or equation, which is a non-branching program) with multiple inputs and a single output for which you want to minimize the output value.

You can do this with a little creativity for a lot of problems. Even something as complex as voice recognition could be reduced to 'find the minimum distance between the word I heard and all other words' and then another one that tries to minimize the 'nonsense factor' of the sentence made of the words you think you heard. At first glance this may seem too complex to actually do, but when you train a deep neural network you are essentially minimizing the error output, so anything a DNN can solve is a type of minimization problem.

Then you use something like simulated annealing or gradient descent to try different versions of inputs and see if you can find the minimum output. For instance, from Wiki, here's simulated annealing on a traveling salesman program - the thing you want to minimize there is the total distance.
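Here is roughly what that looks like as code: a toy simulated-annealing loop over made-up random cities, using segment reversal as the "try different versions of inputs" step:

```python
# Toy simulated annealing for a small travelling-salesman instance.
import math
import random

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(10)]  # made-up cities

def length(tour):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

tour = list(range(10))
T = 1.0
while T > 1e-3:
    i, j = sorted(random.sample(range(10), 2))
    cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # reverse a segment
    delta = length(cand) - length(tour)
    # Always accept improvements; sometimes accept uphill moves while "hot".
    if delta < 0 or random.random() < math.exp(-delta / T):
        tour = cand
    T *= 0.999  # cool slowly
print(round(length(tour), 3))
```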

For problems that wouldn't work, well this whole bulletin board system is not something easily expressed as a minimization problem, unless you're talking about the downhill quality race for 8chan comments.

Most problems can be rewritten so that they represent an energy minimization problem. In this picture, the problem is an energy landscape, and the solution is the lowest-possible energy of that landscape. The trick is finding the combination of bit values that represents that energy.

How exactly is this rewriting done? And what problems fall under "most problems"? Are there classes of problems for which this rewriting simply cannot be done?

We would need someone actually good at quantum computing theory for better and more specific answers.

Meanwhile, in a general sense, the appeal of quantum computing is to solve problems that are currently NP-hard: problems whose time or space requirements grow so quickly with problem size that they soon cannot complete before the heat death of the universe, or require more storage than the size of the universe. A number of these we think (or have proven) will fall to quantum computing. For instance most current cryptography schemes will break and need quantum crypto algorithms to replace them.

Nevertheless, that leaves the even harder problems that are as hard for quantum computers (QNP) as NP is for current architectures. Barring a general outbreak of magic, there will always be problems that are too hard.

Getting back to your question about types of problems, here is an analogy. Currently computers are adding more and more cores. Supercomputers have huge numbers of cores, with various memory schemes attached. Now, if your problem is highly parallel, then you can execute it very efficiently on a supercomputer. Or if you are doing 3D graphics, you can execute very efficiently on a GPU, because that is very parallel. But if, for instance, you are running one of the machine learning algorithms where your result depends on calculations across all or most of the inputs, all the cores in the world do not help. You just need one (or a few) extremely fast cores, because that is the bottleneck. Other algorithms have hard memory requirements: you need a huge amount of unified memory that is always in sync. Classical databases are like that. You can execute faster on a giant slow memory pool than on lots of super-fast cache memory in each core. Harder problems need combinations of these three solutions.

The bottom line is that there is already a huge variety in algorithms and exactly what they bottleneck on. Quantum computers can speed up a subset of those but will choke on others. It seems likely there will be many quantum architectures that specialize in some set of problems, just like the computers of today. D-Wave may just always be the go-to outfit for annealing problems.

tl;dr Yep, there are classes of problems that cannot be rewritten that way.

Most problems can be rewritten so that they represent an energy minimization problem. In this picture, the problem is an energy landscape, and the solution is the lowest-possible energy of that landscape. The trick is finding the combination of bit values that represents that energy.

How exactly is this rewriting done? And what problems fall under "most problems"? Are there classes of problems for which this rewriting simply cannot be done?

For instance most current cryptography schemes will break and need quantum crypto algorithms to replace them.

IIRC, only certain asymmetric crypto is susceptible to known quantum algorithms. Symmetric crypto is safe. Lattice-based asymmetric crypto is thought to be safe.

There's lots of work in what are called "post-quantum" crypto algorithms. They do not seem to have to be quantum, themselves, in order to be resistant to breaking by quantum computers.

Note, however, that we are not sure we've thought of every possible quantum algorithm, so we can really only say a form of crypto is safe from currently known quantum algorithms.

If (and it's a big if) these systems scale to three or four orders of magnitude more qubits, a classical computer simulating that annealing will never finish. So at least there's a doesn't-break-rules-of-physics path to massive performance improvements that doesn't seem to be available from classical computers.

I remember a while ago that Ars reported on the debate as to whether D-Wave's computers actually end up incorporating quantum effects. The point being that even if the qubits are just acting classically, they will eventually find their way to the minimal state; it just might take a little longer.

Has that debate ever been settled?

Chris did write an article where he stated that the qubits were acting, at least in part, with some quantum weirdness. Obviously they're not doing the linked-to-all-neighbors-level of coherence, but there's certainly some overlap of states.

Most problems can be rewritten so that they represent an energy minimization problem. In this picture, the problem is an energy landscape, and the solution is the lowest-possible energy of that landscape. The trick is finding the combination of bit values that represents that energy.

How exactly is this rewriting done? And what problems fall under "most problems"? Are there classes of problems for which this rewriting simply cannot be done?

You turn the problem into a program (or equation, which is a non-branching program) with multiple inputs and a single output for which you want to minimize the output value.

You can do this with a little creativity for a lot of problems. Even something as complex as voice recognition could be reduced to 'find the minimum distance between the word I heard and all other words' and then another one that tries to minimize the 'nonsense factor' of the sentence made of the words you think you heard. At first glance this may seem too complex to actually do, but when you train a deep neural network you are essentially minimizing the error output, so anything a DNN can solve is a type of minimization problem.

Then you use something like simulated annealing or gradient descent to try different versions of inputs and see if you can find the minimum output. For instance, from Wiki, here's simulated annealing on a traveling salesman program - the thing you want to minimize there is the total distance.

For problems that wouldn't work, well this whole bulletin board system is not something easily expressed as a minimization problem, unless you're talking about the downhill quality race for 8chan comments.

The basic process is:
• Build an Ising model of the problem
• Convert the Ising model to a QUBO (Quadratic Unconstrained Binary Optimisation)
• Execute the QUBO on an annealer (DWave, simulated, etc.)

I'm from a programming background rather than a mathematics one so I tend to skip the Ising step as QUBOs in general make more sense to me. For TSP this is how you might break down the problem:
• Define the data/metadata for the problem - in TSP this usually means mapping the distance or time between each of the nodes to a 2 dimensional matrix. If the return journey between nodes is the same distance then this means that the TSP problem is symmetrical and therefore harder to solve
• Define the objective - which is an N dimensional matrix. For TSP this is usually a 2 dimensional matrix with "cities" and "visit order" being the axes
• Label the buckets - each bucket in the objective matrix is given an index; the intersection of the objective choices effectively then maps to the value of a bit/qubit
• Define an objective function - which is a mapping of bits/qubits to be annealed to the weight identified in the data/metadata of the problem. No idea how I can draw a table in here but maybe this looks a bit like (for a 3 node problem):

So the objective function would be a mapping of distances to visit order. If you have a true value for x1 and a true value for x5 then you are travelling from a starting city of A to a second city of B and incur a cost of 3 (as per the data matrix). This means your objective function should have a term of 3*x1x5. You repeat this process for each bucket of the matrix to get:

• Next you define the constraints - so in the objective matrix we don't want more than one true value in a row, as that would mean that two cities are being visited at the same time. We also want only a single true value in each column, as two true values in a column would mean that a city has been visited twice. The constraints boil down to the following:

A QUBO can only have quadratic or linear terms, and these constraints are not yet in a minimisable form, so square them to get the cross terms in quadratic form (remembering the identity x^2 = x for binary variables):

(x1 + x2 + x3 - 1)^2 = 2x1x2 + 2x1x3 + 2x2x3 -x1 -x2 -x3 + 1
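You can verify that expansion mechanically with sympy (a quick sketch):

```python
# Check the squared constraint, using x**2 == x for binary variables.
from sympy import expand, symbols

x1, x2, x3 = symbols("x1 x2 x3")
sq = expand((x1 + x2 + x3 - 1) ** 2)
sq = sq.subs({x1**2: x1, x2**2: x2, x3**2: x3})
print(sq)  # 2*x1*x2 + 2*x1*x3 + 2*x2*x3 - x1 - x2 - x3 + 1
```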

You repeat this for all constraints, then apply a multiplicative factor to make sure the energy penalty is high enough that it never gets breached by the annealing process, add this to the objective function, and voila, you have turned a real world problem into an executable minimisation problem.

Full formula with a penalty scaling factor of 10 (greater than all distance weights):

• The last step is to reconstruct the output back into the problem space. The annealing process will return an energy value and a list of all the variable values that achieved that energy value. The variables you can then relate back to the objective function and represent the minimal state as a tour on a map or whatever.

As quoted above, this process is broadly the same but different for each real world problem. In terms of what can and can't be solved as a minimisation problem that's really up to your imagination. Any problem that can be boiled down to discrete approximate choices can be tackled (vertexes on a mesh for physical problems for example). The challenge at the moment after you conceptualise the objective is figuring out how to fit real world problems into constrained oracles like the 2000Q.
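To tie the steps together, here is a minimal end-to-end sketch in Python. The distance d(A, B) = 3 comes from the example above; the other distances are made up, and the QUBO is minimised by brute force rather than on an annealer:

```python
# 3-city TSP as a QUBO: 9 binary variables, one per (city, visit order) bucket.
from itertools import product

cities = ["A", "B", "C"]
n = len(cities)
dist = {("A", "B"): 3, ("B", "A"): 3,   # d(A, B) = 3 as in the example above
        ("A", "C"): 5, ("C", "A"): 5,   # remaining distances are made up
        ("B", "C"): 4, ("C", "B"): 4}

def var(city, order):                   # map a (city, visit order) bucket to a bit
    return cities.index(city) * n + order

Q = {}                                  # QUBO as {(i, j): weight}
def add(i, j, w):
    key = (min(i, j), max(i, j))
    Q[key] = Q.get(key, 0) + w

# Objective: cost of travelling between consecutively visited cities.
for a, b in product(cities, repeat=2):
    if a != b:
        for t in range(n - 1):
            add(var(a, t), var(b, t + 1), dist[(a, b)])

# Constraints, squared as above, with a penalty scaling factor of P = 10:
# (xi + xj + xk - 1)^2 = 2xixj + 2xixk + 2xjxk - xi - xj - xk + 1
P = 10
groups = [[var(c, t) for t in range(n)] for c in cities]    # each city once
groups += [[var(c, t) for c in cities] for t in range(n)]   # one city per slot
for g in groups:
    for i in g:
        add(i, i, -P)                   # the linear terms
        for j in g:
            if i < j:
                add(i, j, 2 * P)        # the quadratic terms
const = P * len(groups)                 # the "+1" constants from the squaring

# Brute-force the minimum (2^9 states; a real run would sample an annealer).
def energy(bits):
    return sum(w * bits[i] * bits[j] for (i, j), w in Q.items())

best = min(product((0, 1), repeat=n * n), key=energy)
tour = [c for t in range(n) for c in cities if best[var(c, t)]]
print(tour, energy(best) + const)       # ['C', 'B', 'A'] with total distance 7
```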

I remember a while ago that Ars reported on the debate as to whether D-Wave's computers actually end up incorporating quantum effects. The point being that even if the qubits are just acting classically, they will eventually find their way to the minimal state; it just might take a little longer.

Has that debate ever been settled?

Current opinion in the field is that they are seeing some quantum speedups, even on the previous generation of chips (the chimera graph topology mentioned in the article). It’s not been a trivial thing to show, and it took several years of people yelling at each other in equations and graphs. It’s not a universal opinion, but that’s the consensus.

Partly I think it is the consensus because the doubters have moved on. The field has largely moved on. There's a lot of excitement about superconducting qubits, but neither the commercial nor the academic efforts are trying to follow D-Wave. They are aiming at architectures that will allow for universal, fault-tolerant quantum computation.

I'm a mechanical engineer, so stuff like digital circuits is outside of my wheelhouse, but I can at least understand logic, gates, and a 101-level explanation of how a simple binary computer does arithmetic.

Every time I read about D-Wave, it reminds me of Theranos. It all sounds so hyped.

yeah, this. my lack of knowledge makes this stuff sound like it's not even real. I can visualize how transistors are constructed and at least have a rudimentary understanding how they're arranged in silicon to form an IC. This? I can't even begin to wrap my head around how it could physically be built.

Can you imagine smartphones having quantum computing abilities? And by that I don't mean a quantum CPU necessarily, but the ability to tap into a quantum computing node/server. I certainly can't imagine that, but I know it's coming. What I feel right now must be how early flight pioneers felt looking at the works of von Braun.

yeah, this. my lack of knowledge makes this stuff sound like it's not even real. I can visualize how transistors are constructed and at least have a rudimentary understanding how they're arranged in silicon to form an IC. This? I can't even begin to wrap my head around how it could physically be built.

The physical construction is surprisingly similar to how transistors are constructed. In both cases, you're talking about lithography on a substrate which then defines "active areas" and "interconnects". The materials are different; in a conventional IC, the active areas are doped silicon and other things forming transistors, and the interconnects are copper or some other metal. In a superconducting qubit array like a D-Wave chip, the active areas are essentially loops of some superconducting material (aluminum or niobium typically, with the former being most common in quantum computing). "Loops" is a simplification; you need at least one magic circuit element called a Josephson Junction which I'm not even going to try to explain ("Insert quantum mechanics here") but those are made using the same lithographic and deposition techniques as the rest of the circuit. Interconnects are likewise superconducting wires or coplanar waveguides or similar.

End result is a fairly large chip which goes into some sort of socket that fans out into a whole bunch of microwave connectors of some sort or another. Then, of course, you need to cool the thing down to roughly 0.01 K (which is, surprisingly, off-the-shelf technology these days. It's a big and expensive shelf of course. Google search term: "Helium dilution refrigerator") and plug in a couple of racks worth of room temperature microwave gear and control electronics.

Can you imagine smartphones having quantum computing abilities? And by that I don't mean a quantum CPU necessarily, but the ability to tap into a quantum computing node/server. I certainly can't imagine that, but I know it's coming. What I feel right now must be how early flight pioneers felt looking at the works of von Braun.

At some point, it's pretty likely that this will happen.

My personal expectation is that at some point, quantum computing will be added to Google's/Apple's/other's respective computing clouds in the same exact way that various other accelerators (Google's Tensor ASICs, NVidia's Tesla, etc) have been added to their respective data centers. I'd guess that quantum compute would not be added as cards like the Nvidia Tesla cards, but as separate machines. Instead of having a superscalar processor with specialized units, the data center will have specialized computers which handle specialized workloads. In effect, heterogeneous computing will be at the data center level instead of the compute node level.

Which says something about where we are as a species that such a situation is economically viable.

Computers started as massive infrastructure elements that could be time-shared to multiple users over a wide geographic domain. And now data centers are becoming much the same thing. It's just that the pipes are a lot fatter and both the "dumb" terminal and the computer (center) itself are both a lot more powerful.

How big an RSA public key can D-Wave now factor? How big an ECC public key can it find the private key of?

Not a very big one - the question is one of prime factorization. The largest number that a DWave has factored (in published literature) is 376289 ( https://arxiv.org/abs/1804.02733 ). The prime factors are usually approximately one half the bit length of the resulting key. So, to 'crack' a 2048-bit RSA key, you need to be able to find two prime factors of approximately 1024 bits each (note that FIPS 186-4 specifies that factors p and q are EXACTLY 1024-bit).
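For a sense of the gap, the quoted record yields to classical trial division instantly (a quick sketch):

```python
# Factor the published record number by trial division.
n = 376289
p = next(d for d in range(2, int(n**0.5) + 1) if n % d == 0)
print(p, n // p)  # 571 * 659: two 10-bit primes, versus ~1024 bits for RSA-2048
```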

There is a brute force approach that attempts to leverage an assumption of straight character set range results which (in theory) could work to crack larger keys, but I've only seen it talked about on a dry erase board.

Can you imagine smartphones having quantum computing abilities? And by that I don't mean a quantum CPU necessarily, but the ability to tap into a quantum computing node/server. I certainly can't imagine that, but I know it's coming. What I feel right now must be how early flight pioneers felt looking at the works of von Braun.

At some point, it's pretty likely that this will happen.

My personal expectation is that at some point, quantum computing will be added to Google's/Apple's/other's respective computing clouds in the same exact way that various other accelerators (Google's Tensor ASICs, NVidia's Tesla, etc) have been added to their respective data centers. I'd guess that quantum compute would not be added as cards like the Nvidia Tesla cards, but as separate machines. Instead of having a superscalar processor with specialized units, the data center will have specialized computers which handle specialized workloads. In effect, heterogeneous computing will be at the data center level instead of the compute node level.

DWave offers API-based execution of quantum workloads in their cloud-connected environments through their LEAP platform. I've already tried this; it is probably well within the realm of reason today to have a mobile app call that API.

Additionally, Rigetti Computing offers their Forest SDK to utilize their Quantum Cloud Services. I'm not aware of any reason you couldn't implement calls directly from a mobile platform (or, more likely, proxy through a web service).

Now, the real question is whether you can do anything productive with it. I think (know) that we are getting to the point where that question's answer is yes. If you want to know more I'm happy to talk about it - chances are you can find my contact info from my username here.
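For anyone curious what such a call looks like, here's a minimal sketch using D-Wave's Ocean SDK (assumes the dwave-ocean-sdk package and a configured LEAP API token):

```python
# Submit a toy two-variable QUBO to a D-Wave solver over the LEAP API.
from dwave.system import DWaveSampler, EmbeddingComposite

Q = {(0, 0): -1, (1, 1): -1, (0, 1): 2}        # minimum at x0 != x1
sampler = EmbeddingComposite(DWaveSampler())   # handles minor-embedding for you
sampleset = sampler.sample_qubo(Q, num_reads=100)
print(sampleset.first)                         # lowest-energy sample found
```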

Most problems can be rewritten so that they represent an energy minimization problem. In this picture, the problem is an energy landscape, and the solution is the lowest-possible energy of that landscape. The trick is finding the combination of bit values that represents that energy.

How exactly is this rewriting done? And what problems fall under "most problems"? Are there classes of problems for which this rewriting simply cannot be done?

The TSP example below is a good one. Another good example is solving a Sudoku puzzle - especially when you approach it probabilistically.

Because the DWave does not just provide the one best answer for a problem, but can execute the 'program' repeatedly and sample the results, you can get distributions of answers.

Take a simple 4x4 sudoku puzzle - given none of the values, how many possible solutions are there?
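(The answer is 288, which is small enough to confirm with a classical brute force; a sketch:)

```python
# Count every valid completed 4x4 Shi Doku grid.
from itertools import permutations, product

rows = list(permutations(range(1, 5)))          # 24 candidate rows
count = 0
for g in product(rows, repeat=4):
    cols_ok = all(len({g[r][c] for r in range(4)}) == 4 for c in range(4))
    boxes_ok = all(len({g[r][c], g[r][c + 1], g[r + 1][c], g[r + 1][c + 1]}) == 4
                   for r in (0, 2) for c in (0, 2))
    if cols_ok and boxes_ok:
        count += 1
print(count)  # 288
```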

Since the minimal number of clues required to reach a distinct solution in the 4x4 (2x2) Shi Doku style puzzle is 4 (or 5 or 6), that means that I still have a space of equally probable solutions until I choose the next 3 (or 4 or 5) values which collapses the problem to a single possible solution.

What's interesting is that using the DWave solver on this problem with a number of samples means I can see not just _A_ possible solution, but all of the possible solutions with a proper number of samples and a probability of each being the 'correct' outcome given the known values. With a classical solver or algorithm implementing either a backtracking (brute force) or stochastic search we could find ourselves having to search the entire possible result space in order to find all the possible solutions. While trivial for the Shi Doku style, this rapidly escalates out of reasonable capability with increasing puzzle size.

Well, I had been working on a mobile application which uses the phone camera and digit recognition to characterize the puzzle and send it via the DWave LEAP API (proxied) to find the solution and return the result. The only limit I was facing was the ability for the solution to fit within the available qubit space/topology of the platform - the chain lengths I needed were too long and no embedding was possible. With the new Pegasus topology and higher qubit count this should be a realizable solution.

Is it a productive problem? Not really, but there are a number of abstractions of the problem which are real and tangible and where the existing solutions from Rigetti and DWave are being realized (check out OTI Lumionics and ProteinQure for examples). Also, those interested in the space should take a look at the Creative Destruction Lab at the University of Toronto where they have been incubating companies and concepts that take advantage of this technology for at least 2 years.