True... but I still want to evaluate the difference in speed between that and a lookup array - "if(array[char])" - which would require only 128 bytes (assuming negative values are prefiltered, which they are, and that bytewise operations are faster than bitwise ones, which they also are) and a single L1 cache lookup.

I am assuming Java's native methods use a clever bitwise-and to determine whether the character is alphabetic in a single cycle* without a memory access, but if not, there's no reason to depend on library operations.

*Note - I'm not sure whether this is actually possible; it just seems likely.

Oh, okay. I guess I missed the "merge" step of the assembly then. I just looked at the first sentence and didn't realise Tadpole was only an error corrector / extender:

I'm getting a little confused by this conversation. Tadpole does act as an assembler when it isn't in read-extend mode, right? I throw reads at it and it generates long contigs (like entire mitochondria). Are gringer's results as they are because in and in2 were used, priming Tadpole for a different mode like extend-only?

I was earlier confused about extend mode, thinking that it applied to contigs, but extending my reads and then assembling with longer kmers gave a much better assembly with longer contigs.

I'm not sure about the rinse option that was mentioned, though. How does it affect the output?

I also noticed that the newer version of Tadpole reports on scaffolds, but I thought it wasn't paired-end-aware during assembly? How does the scaffolding come into play?

Sorry for all the questions, but I feel like there are all sorts of aspects of these tools that I am underutilizing.

In default mode, Tadpole assembles reads and produces contigs. In "extend" or "correct" mode, it will extend or correct input sequences - which can be reads or contigs, but it's designed for reads. When I use Tadpole for assembly, I often first correct the reads, then assemble them, which takes two passes. Tadpole will build contigs unless you explicitly add the flag "mode=extend" or "mode=correct", regardless of whether you have 1 or 2 inputs. In extend or correct mode, it will modify the input reads, and not make contigs.
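As a minimal sketch of the three invocations described above (file names are placeholders; mode=extend and mode=correct are the flags named in this post, and in=/out= are the standard BBTools parameters):

```shell
# Default mode: assemble reads into contigs
tadpole.sh in=reads.fq out=contigs.fa

# Correct mode: error-correct the input reads (no contigs produced)
tadpole.sh in=reads.fq out=corrected.fq mode=correct

# Extend mode: extend the input reads (no contigs produced)
tadpole.sh in=reads.fq out=extended.fq mode=extend
```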

I'm glad to hear that you've achieved more contiguous assemblies after extending the reads and assembling them with longer kmers - that was my goal in designing that mode (and particularly, to allow merging of non-overlapping reads), but unfortunately I've been too busy to test it thoroughly. You've given me a second data point about it being beneficial, though, so thanks!

"shave" and "rinse" are what some assemblers call "tip removal" and "bubble removal". But they are implemented a bit differently and occur before the graph is built, rather than as graph-simplification routines. As such, they pose virtually no risk of causing misassemblies, and they reduce the risk of misassemblies due to single chimeric reads. Unfortunately, in my experience, they also provide only very minor improvements in continuity and error correction. Sometimes they make subsequent operations faster, though.

By default, adding the flag "shave" will remove dead-end kmer paths of depth no more than 1 and length no more than 150 that branch out of a path with substantially greater depth. "rinse", similarly, removes only short paths of depth no more than 1 in which each end terminates in a branch node of substantially greater depth. Because these operations are so conservative, they seem to have little impact.

Assemblers like Velvet and AllPaths-LG can collapse bubbles with a 50-50 split in path depth, which greatly increases continuity (particularly with diploid organisms), but poses the risk of misassemblies when there are repeat elements. Tadpole always errs on the side of caution, preferring lower continuity to possible misassemblies.
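So enabling both operations is just a matter of adding the two flags named above (file names are placeholders):

```shell
# Assemble with conservative tip and bubble removal enabled
tadpole.sh in=reads.fq out=contigs.fa shave rinse
```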

Tadpole is still not pair-aware and does not perform scaffolding, though that's certainly my next goal, when I get a chance. When you generate contigs, Tadpole automatically runs AssemblyStats (which you can run as standalone using stats.sh). This mentions scaffolds in various places, because it's designed for assemblies that are potentially scaffolded, but you'll note that for Tadpole the scaffold statistics and contig statistics are identical.

Don't feel like you have to use all aspects of Tadpole in order to use it effectively! I am currently using it for mitochondrial assembly also, because it's easy to set a specific depth band to assemble, and thus pull out the mito without the main genome after identifying it on a kmer frequency histogram (in fact, I wrote a script to do this automatically). But in that case I don't actually use the error-correction or extension capabilities, as they are not usually necessary as the coverage is already incredibly high and low-depth kmers are being ignored. I use those more for single-cell work, which has lots of very-low-depth regions.

Thanks SNPsaurus. I looked at the examples that were provided in the first post and assumed that they were the most appropriate way to run the program. It seems a little odd that an assembler would stop assembling when you give it more parameters. Your comment has cleared this up a bit for me, and now I see that running without any arguments at all gives reasonable results.

I've now got a 3.6kb sequence that has crazy-high coverage (140564.6) which I assume is probably the viral sequence of interest. Here's the command I ran:

Thanks Brian for the notes. I'm actually using it for assembling data that resembles ChIP-Seq data, with high coverage of random shotgun fragments at discrete loci. The mtDNA is just along for the ride!

I do have paired-end data, and your comments about merging make me wonder if I should add a step to the process.

Right now I take my fastq reads and extend with ecc=true, so extend with error correction. I then assemble at a higher kmer (71).

But would it be better to extend with error correction and then merge the paired reads, and then assemble? The fragments sequenced are longer than the read lengths, so I couldn't merge on the raw sequence files. But after extension they might reach each other. I was thinking of using a tool like http://www.ncbi.nlm.nih.gov/pubmed/26399504 but perhaps read extension and merging would achieve the same results.
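That pipeline could be sketched roughly as below, assuming BBMerge (bbmerge.sh, which ships alongside Tadpole in BBTools) for the merging step. File names are placeholders, and the exact parameter spellings are worth checking against each script's help text:

```shell
# 1. Extend the paired reads with error correction (ecc=true, as above)
tadpole.sh in=r1.fq in2=r2.fq out=extended.fq mode=extend ecc=t

# 2. Merge pairs whose extended reads now overlap each other
bbmerge.sh in=extended.fq out=merged.fq outu=unmerged.fq

# 3. Assemble merged and unmerged reads together at a higher kmer
tadpole.sh in=merged.fq,unmerged.fq out=contigs.fa k=71
```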

If you have super-high coverage like that, at a minimum, increasing K above the default (31) is usually helpful.

Increasing k didn't help. If anything, it made the assembly worse. I've been able to generate 110 contigs with a coverage >500 and average length around 1000bp. I've attached a visual representation of the resulting FASTA file where sequences between homopolymers are coloured based on their GC%, AC%, TC%. Unfortunately, I can't see any common subsequences that would work for further merging.

I've recently discovered that these are transcript sequences, not genomic sequences (despite the initial assurance against that). So the 140,000x coverage and 500x coverage could possibly be for adjacent sequences, and it is [possibly] reasonable to expect target genes with coverage below 500.

Thanks for building this amazing assembler. I have been using it from within the Geneious software package for a few weeks now with great success. Quick question: is it possible to enter a command line under 'custom Tadpole options' that will cause the assembler to only use a specific percentage of the data? Also, is it possible to specify a maximum number of reads to use when performing the assembly (as opposed to a percentage)?

I ask because I am trying to assemble a large number of plasmids from MiSeq FASTQ data, and I've found that the results are much better when only using 10% of the data. However, there is a bug wherein this option is missing from the workflow version of Tadpole, and another where it defaults to 100% when attempting to assemble each sequence list separately. They know about the first bug and are fixing it, but I thought there might be a quick workaround via the command line.

Tadpole does not do subsampling; you'd have to first sample 10% of the reads with another tool (such as Reformat, with the "samplerate=0.1" flag). However, you CAN restrict the input to a certain number of reads. "reads=400k", for example, will only use the first 400,000 reads (or if the reads are paired, the first 400,000 pairs).
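For example (a sketch; file names are placeholders, the flags are the ones named above):

```shell
# Subsample ~10% of the reads first, then assemble
reformat.sh in=reads.fq out=sampled.fq samplerate=0.1
tadpole.sh in=sampled.fq out=contigs.fa

# Or simply cap the input at the first 400,000 reads (or pairs)
tadpole.sh in=reads.fq out=contigs.fa reads=400k
```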

Tadpole also has "mincr" and "maxcr" flags, which allow you to ignore kmers with counts outside of some band; these are sometimes useful for ignoring low-depth junk that can occur when you have too much data.
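A hypothetical invocation (the cutoff values here are arbitrary placeholders, not recommendations):

```shell
# Ignore kmers with counts below 5 or above 100000
tadpole.sh in=reads.fq out=contigs.fa mincr=5 maxcr=100000
```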

Also, I recommend you try normalization (targeting maybe 100x coverage) - there are many situations where normalization gives a better assembly than subsampling, particularly when the coverage of features is highly variable.
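Normalization to ~100x could be sketched with BBNorm (bbnorm.sh, also part of BBTools; file names are placeholders):

```shell
# Normalize to roughly 100x coverage, then assemble
bbnorm.sh in=reads.fq out=normalized.fq target=100
tadpole.sh in=normalized.fq out=contigs.fa
```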

Thanks very much for your quick feedback. That does help, thank you. I was able to specify reads=1000 and that approximated the 10% for some test cases. I took a quick shot at normalizing, and I'm having some issues with it, but they may have more to do with Geneious than anything.

How do you identify the mitochondrial band? Could you share the script?

This works well with error-corrected PacBio reads, which have a very unbiased coverage distribution. It was able to isolate and assemble the mitochondria in 4/4 fungi tested. I'm not sure how well it works on Illumina reads, though. Also, note that the default k=100 is designed for reads of at least 150bp.

ok, noted. I will test on Illumina reads and report back later. Many thanks.

Hi! Although this is a very helpful post, there are still many unclear points about Tadpole.
As input, I have a fasta file with ~11 million reads, and I would like Tadpole to take a random 50% of the reads from this fasta file.
I see that there is a reads=-1 input parameter, where -1 (the default) stands for all reads, so I am curious what would stand for 50%, etc. I can't find any info on this parameter, so I would appreciate any help!

Provide the number of reads that you want to use, e.g. reads=5500000. I don't know if that randomly subsamples, though; it may be the first 5.5M reads. @Brian will have to clarify.

If you want to randomly subsample the reads, you could also use reformat.sh or seqtk (sample) from Heng Li.
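For the 50% case above, either tool would look roughly like this (file names are placeholders; -s sets seqtk's random seed for reproducibility):

```shell
# BBTools: random ~50% subsample
reformat.sh in=reads.fasta out=half.fasta samplerate=0.5

# seqtk: random 50% subsample with a fixed seed
seqtk sample -s100 reads.fasta 0.5 > half.fasta
```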

java.io.EOFException: Unexpected end of ZLIB input stream
at java.util.zip.InflaterInputStream.fill(InflaterInputStream.java:240)
at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:158)
at java.util.zip.GZIPInputStream.read(GZIPInputStream.java:117)
at fileIO.ByteFile1.fillBuffer(ByteFile1.java:217)
at fileIO.ByteFile1.nextLine(ByteFile1.java:136)
at fileIO.ByteFile2$BF1Thread.run(ByteFile2.java:264)
open=true
Exception in thread "Thread-4" java.lang.AssertionError:
There appear to be different numbers of reads in the paired input files.
The pairing may have been corrupted by an upstream process. It may be fixable by running repair.sh.
at stream.ConcurrentGenericReadInputStream.pair(ConcurrentGenericReadInputStream.java:480)
at stream.ConcurrentGenericReadInputStream.readLists(ConcurrentGenericReadInputStream.java:345)
at stream.ConcurrentGenericReadInputStream.run(ConcurrentGenericReadInputStream.java:189)
at java.lang.Thread.run(Thread.java:745)
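A sketch of the repair.sh fix suggested in the error message above (file names are placeholders; outs= collects reads left without a mate - worth checking the flag spellings against repair.sh's own help text):

```shell
# Re-pair the two input files and set aside singletons
repair.sh in1=r1.fq.gz in2=r2.fq.gz out1=fixed1.fq.gz out2=fixed2.fq.gz outs=singletons.fq.gz
```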