Cluster Sequencing with Oxford Nanopore’s GridION System

More on nanopore sequencing this week. I mentioned in my Genetic Future post that the UK sequencing company Oxford Nanopore is somewhat of a dark horse, and that an agreement with Illumina has required complete silence about their potential DNA sequencing machines. However, this wasn’t strictly true; Illumina’s agreement covers only the exonuclease sequencing technology, and it is on that front that we aren’t likely to hear anything until it is ready.

However, Oxford Nanopore still can, and does, talk about other aspects of their technology. And today, they have released information on their website about the GridION platform, which will be used to run all their nanopore technology (including DNA sequencing and protein analysis). In effect, these are details about the sequencing machine, but with no new specifics about the sequencing process itself.

Here are a few first impressions.

Sequencing in Clusters

The machines are small and low-cost; I expect they will cost the same as or less than an Ion Torrent machine. Like the Ion Torrent, MiSeq and GS Junior, the Nanopore machines should be suitable for the bench of a small lab, running small projects on a small budget and floorspace.

However, this isn’t the full story. Each individual machine is rocking the VCR-machine-circa-1992 look, and the reason for this becomes clear when you see many of them together. The boxes are designed to fit together in standard computing cluster racks, and Oxford Nanopore refer to each of the individual machines as “nodes”. The nodes connect together via a standard network, and can talk to each other, as well as reporting data in real time through the network to other computers. When joined together like this, one machine can be designated as the control node, and during sequencing many nodes can be assigned to sequence the same sample.
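None of the control software has been described publicly, but the node/control-node arrangement is easy to picture. Here is a minimal sketch, with class names and an assignment scheme entirely of my own invention, of a control node pointing several nodes at the same sample:

```python
# Hypothetical sketch of a GridION-style rack: one control node assigns
# sequencing nodes to samples. All names are invented for illustration;
# nothing here reflects Oxford Nanopore's actual software.

class Node:
    def __init__(self, name):
        self.name = name
        self.sample = None  # sample currently being sequenced, if any

class ControlNode:
    def __init__(self, nodes):
        self.nodes = nodes

    def assign(self, sample, n_nodes):
        """Point up to n_nodes idle nodes at the same sample."""
        idle = [n for n in self.nodes if n.sample is None]
        for node in idle[:n_nodes]:
            node.sample = sample
        return len(idle[:n_nodes])

rack = ControlNode([Node(f"node-{i}") for i in range(4)])
assigned = rack.assign("sample-A", 3)  # three nodes sequence the same sample
```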

So, you could also spend a PacBio-like nearly-a-million to buy a rack of these machines to sit in a sequencing facility (incidentally, that’d still be about a fifth of the size of a PacBio machine). Oxford Nanopore is clearly aiming big here; the image in the video (to the right) shows a “nanopore cluster” with about 300 individual nodes, which would be a tens-of-millions-of-pounds facility. It looks like they aren’t pitching this as a “complementary technology”, sitting alongside existing machines, but as a “take over the major sequencing centers” technology.

Interacting with Sequence in Real-Time

Another aspect that Oxford Nanopore is playing up is the ability of the machines to react in real time; the machine can change aspects of its behaviour in response to orders given during sequencing. Some of these will be automatic quality-control changes: the salt concentration and the temperature can change to optimize the sequencing speed or quality. The machines can also be given basic preset targets: sequence until we have enough reads, or enough coverage, or a good enough idea of the concentration of a particular protein. This means that instead of running the machine for a set period of time, you can instead run until you have what you want.
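The “run until you have what you want” idea amounts to replacing a fixed run time with a stopping condition. A toy model (all numbers invented; a real machine would evaluate the condition against live data rather than a fixed yield per polling interval):

```python
# Toy model of "run until done": sequencing stops when a coverage
# target is met rather than after a fixed run time.

def run_until_coverage(genome_size, target_coverage, bases_per_cycle):
    total_bases = 0
    cycles = 0
    while total_bases < genome_size * target_coverage:
        total_bases += bases_per_cycle  # one polling interval's worth of reads
        cycles += 1
    return cycles, total_bases / genome_size

# e.g. a 1 Mbp genome to 30X, gaining 4 Mbp per cycle: stops after 8 cycles
cycles, coverage = run_until_coverage(1_000_000, 30, 4_000_000)
```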

There are a couple of things that make this more complicated, and actually pretty cool. You can also load up the machines with up to 96 different samples, so you can decide to sequence one sample until you have enough DNA from it, then move onto another one, and so on. The machines can also talk to each other; for instance, four machines could sequence the same sample, and stop once they had produced enough sequence between them. Finally, the machines have built-in APIs to allow them to respond to external programs of arbitrary complexity; for instance, you could connect your machines to a computing cluster that is aligning reads and making variant calls as the sequence runs, and you could decide which sample to sequence next based on the SNP calls from the first.
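That last example is just a feedback loop: analysis results from one sample drive the next sequencing decision. The real API hasn’t been described, so here is a sketch with a made-up interface and sample names, where an external pipeline reorders the sample queue if a variant of interest turns up:

```python
# Hypothetical feedback loop: an external analysis pipeline inspects the
# variant calls from the sample just sequenced and reorders the sample
# queue accordingly. Interface and names are invented; the real GridION
# API has not been described.

def choose_next(variant_calls, queue):
    """Move family members to the front if the variant of interest was seen."""
    if "SNP_of_interest" in variant_calls:
        relatives = [s for s in queue if s.startswith("relative")]
        others = [s for s in queue if not s.startswith("relative")]
        return relatives + others
    return queue

queue = ["sample-2", "relative-1", "relative-2"]
reordered = choose_next({"SNP_of_interest"}, queue)
# relatives are prioritised: ["relative-1", "relative-2", "sample-2"]
```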

As I’ve said before, the next batch of sequencing machines is going to raise a whole new set of bioinformatics challenges, as well as requiring us to think about experimental design more carefully to make the most of the tech.

What does this mean for sequencing?

The timing of this release is almost certainly not a coincidence. The Advances in Genome Biology and Technology conference is coming up next week, and this information will probably be fresh in the minds of those who follow sequencing tech. I cannot help but think that this information serves to reassure researchers and investors that Oxford Nanopore are making headway, despite their radio silence last year. This information, combined with the recent advances in nanopore technology that I talked about earlier this week, shows that nanopore technology is making solid advances towards working machines, and that these machines could be game-changers.

The info also allows us to make some inferences about the specs of the machine, while not giving away anything solid. We know that the machines are small and that the nanopores will be distributed in disposable cartridges, and the company has already described the machines as low cost. We can also start making guesses about the throughput of the machines.

The array shown in the video has about 7,000 wells on it; while this is of course going to be for illustration purposes only, let us make the (somewhat baseless) assumption that this is a realistic expectation for the array density. Assuming an occupancy of 35% (the Poisson limit) and a ratcheting speed of 50 bases/second, a single machine would be running about 500 Mbp/hour; this is a 30X genome in a week, or about a quarter of the throughput of the upgraded HiSeq. If they do manage to scale up to “hundreds of thousands of pores”, then the throughput could rise to tens of Gbp/hour.
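For anyone who wants to vary these (admittedly baseless) assumptions, the arithmetic is simple; a ~3.1 Gbp human genome is my assumption for the 30X figure:

```python
# Back-of-the-envelope throughput, using the assumptions in the text:
# ~7,000 wells, 35% occupancy (the Poisson limit), 50 bases/second/pore.
wells = 7_000
occupancy = 0.35
bases_per_second = 50

active_pores = wells * occupancy                    # 2,450 active pores
bases_per_hour = active_pores * bases_per_second * 3600
mbp_per_hour = bases_per_hour / 1e6                 # ~441 Mbp/hour

# A 30X human genome is roughly 30 * 3.1 Gbp = 93 Gbp:
days_for_30x = 93e9 / (bases_per_hour * 24)         # a bit over a week
```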

Of course, the cost and accuracy will also be very important for assessing how good a prospective machine would be, and we really have no idea about either of these yet. However, provided the throughput is high enough, the fact that the technology is single molecule should keep the enzyme cost down.

For lots more information on the system, see the website, where they have also produced a helpful video that explains the system.

Comments

I have to admit that, with many of the recent advances in detection for nanopore sequencing, I thought Oxford Nanopore would be among the principal leaders in the field, but I wasn’t expecting anything so soon.

Whilst the versatility of this is clear for all to see, one wonders whether the bench-top running costs (i.e. the cartridges and reagents) are going to be prohibitive, or whether they will become one of its major strengths from an economic point of view, as Fluidigm have marketed with their Biomark equipment and integrated fluidics chips for genotyping and gene expression.

Thanks for both these articles on nanopore technology. The sensitivity, speed, size and power consumption of these machines are at least partially limited by the data conversion technology used at the chemical/digital information interface. Fascinating.

The concept of defining preset targets is incredibly enticing. Stopping midstream when sufficient coverage, evidence, quality, or that one special read is obtained is sorely needed. It will certainly expand the realm of experiments and allow investigators to try harebrained ideas for less cost. My only question is “Who’s going to build the robot to load all of the Betamax DNA samples into the sequencing farms?”.

Throughput and accuracy aside, I truly hope this pans out as I can’t wait to tinker with one.